Wednesday, May 22, 2013

The limits of empathy

I thought I would follow up Monday's post on well-being, kindness, happiness and all that good stuff by noting a piece on how feel-good energy can lead us astray. Yale psychologist Paul Bloom has written an excellent article in the May 20 issue of The New Yorker titled “The baby in the well - the limits of empathy.” Well-meant feelings and actions of empathy can in some cases be counterproductive and blind us to more remote but statistically much more important hardships. Our evolved ability to feel what others are feeling (see numerous MindBlog posts on mirror neurons, etc.) is applied to very specific and limited human situations, usually a particular individual (a six-year-old girl falls in a well and the nation watches the rescue) or a defined and limited group (the mass shooting at Sandy Hook or the Boston Marathon bombing). From Bloom:
In the past three decades, there were some sixty mass shootings, causing about five hundred deaths; that is, about one-tenth of one per cent of the homicides in America. But mass murders get splashed onto television screens, newspaper headlines, and the Web; the biggest ones settle into our collective memory —Columbine, Virginia Tech, Aurora, Sandy Hook. The 99.9 per cent of other homicides are, unless the victim is someone you’ve heard of, mere background noise.
After noting how empathy research is thriving, and several books arguing that more empathy has to be a good thing (with Rifkin, in “The Empathic Civilization” (Penguin), wanting us to make the leap to “global empathic consciousness”), Bloom notes:
This enthusiasm may be misplaced, however. Empathy has some unfortunate features—it is parochial, narrow-minded, and innumerate. We’re often at our best when we’re smart enough not to rely on it... the key to engaging empathy is what has been called “the identifiable victim effect.” As the economist Thomas Schelling, writing forty-five years ago, mordantly observed, “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”
You can see the effect in the lab. The psychologists Tehila Kogut and Ilana Ritov asked some subjects how much money they would give to help develop a drug that would save the life of one child, and asked others how much they would give to save eight children. The answers were about the same. But when Kogut and Ritov told a third group a child’s name and age, and showed her picture, the donations shot up—now there were far more to the one than to the eight.
In the broader context of humanitarianism, as critics like Linda Polman have pointed out, the empathetic reflex can lead us astray. When the perpetrators of violence profit from aid—as in the “taxes” that warlords often demand from international relief agencies—they are actually given an incentive to commit further atrocities.
A “politics of empathy” doesn’t provide much clarity in the public sphere, either. Typically, political disputes involve a disagreement over whom we should empathize with. Liberals argue for gun control, for example, by focussing on the victims of gun violence; conservatives point to the unarmed victims of crime, defenseless against the savagery of others.
On many issues, empathy can pull us in the wrong direction. The outrage that comes from adopting the perspective of a victim can drive an appetite for retribution....In one study, conducted by Jonathan Baron and Ilana Ritov, people were asked how best to punish a company for producing a vaccine that caused the death of a child. Some were told that a higher fine would make the company work harder to manufacture a safer product; others were told that a higher fine would discourage the company from making the vaccine, and since there were no acceptable alternatives on the market the punishment would lead to more deaths. Most people didn’t care; they wanted the company fined heavily, whatever the consequence.
There’s a larger pattern here. Sensible policies often have benefits that are merely statistical but victims who have names and stories. Consider global warming—what Rifkin calls the “escalating entropy bill that now threatens catastrophic climate change and our very existence.” As it happens, the limits of empathy are especially stark here. Opponents of restrictions on CO2 emissions are flush with identifiable victims—all those who will be harmed by increased costs, by business closures. The millions of people who at some unspecified future date will suffer the consequences of our current inaction are, by contrast, pale statistical abstractions.
Moral judgment entails more than putting oneself in another’s shoes. “The decline of violence may owe something to an expansion of empathy,” the psychologist Steven Pinker has written, “but it also owes much to harder-boiled faculties like prudence, reason, fairness, self-control, norms and taboos, and conceptions of human rights.” A reasoned, even counter-empathetic analysis of moral obligation and likely consequences is a better guide to planning for the future than the gut wrench of empathy.
Newtown, in the wake of the Sandy Hook massacre, was inundated with so much charity that it became a burden. More than eight hundred volunteers were recruited to deal with the gifts that were sent to the city—all of which kept arriving despite earnest pleas from Newtown officials that charity be directed elsewhere....Meanwhile—just to begin a very long list—almost twenty million American children go to bed hungry each night, and the federal food-stamp program is facing budget cuts of almost twenty per cent.
Such are the paradoxes of empathy. The power of this faculty has something to do with its ability to bring our moral concern into a laser pointer of focussed attention. If a planet of billions is to survive, however, we’ll need to take into consideration the welfare of people not yet harmed—and, even more, of people not yet born. They have no names, faces, or stories to grip our conscience or stir our fellow-feeling. Their prospects call, rather, for deliberation and calculation. Our hearts will always go out to the baby in the well; it’s a measure of our humanity. But empathy will have to yield to reason if humanity is to have a future.

Tuesday, May 21, 2013

Transferring from Google Reader to Feedly

I've just finished editing and culling the "Other Mind Blogs" list in the right column of this blog. If you now get the feeds of any of these blogs, or of Deric's MindBlog, through Google Reader, which shuts down on July 1, they can all be transferred automatically to the Feedly reader at Feedly.com. The search box at the upper right corner of the Feedly page lets you enter the URLs of further blogs or news sources you wish to follow. (For a more thorough listing of options, see my March 26 post.)

Monday, May 20, 2013

On well-being - An orgy of good energy last week in Madison, Wisconsin.

In spite of the slightly flippant title of this post, I really do believe this is good stuff. The Dalai Lama paid a two-day visit to Madison, Wisconsin last week as part of his current world tour, “Change Your Mind, Change The World,” speaking at a number of different venues (all under high security screening) under the sponsorship of the Center for Investigating Healthy Minds and the Global Health Institute, both at the University of Wisconsin-Madison. My colleague Richard Davidson, who was central in arranging the visit, is doing an amazing job of bringing neuroscientific and psychological insight into well-being and happiness to the general public. (Side note: Davidson contributed to a brain imaging seminar I organized for the graduate Neuroscience program in the 1980s.) An example of his public outreach is this recent piece in The Huffington Post.

The point that I find most compelling, and it certainly resonates with my own experience, is the hard evidence that kindness and generosity are innate human predispositions whose exercise is more effective in promoting a sense of well-being than explicitly self-serving behaviors are. (Of course, this message has been a component of the major religious traditions for thousands of years.) There is accumulating evidence that kind and generous behavior reduces inflammatory chemistry in our bodies.

I have used the tags ‘happiness’ and ‘mindfulness’ (in the left column) to mark numerous posts on well-being over the past seven years. Right now my queue of potential posts in this area has more items than I will ever get to individually. So... I thought I would just list a few of them here for MindBlog readers who might wish to check them out:

On happiness, from the New York Times Opinionator column.

A 75-year Harvard Study's finding on what it takes to live a happy life. 

A brief New York Times piece on mindfulness.

How your mind wandering robs you of happiness. (also, enter ‘mind wandering’ in the blog’s search box)

Is giving the secret to getting ahead?

On money and well being.


Saturday, May 18, 2013

On continuing MindBlog - Drawing personal structure from sampling the digital stream.

The responses in comments and emails to my ‘scratching my head about mindblog’ post are telling me that my small contributions are valued, with some readers making them part of the ritual that structures their lives. So, I guess I should listen to that rather than fretting about adding to the digital stream that threatens to overwhelm us all.
We all want to understand how our show is run, what is going on with the little grey cells between our ears (and of course, we would like to run it better). We want to ‘see’ in addition to just ‘being.’ Indeed, this distinction is one of the most central ones I have been making through the course of over three thousand posts. It can be recast in numerous guises, such as being a moral agent in addition to a moral patient, or the distinction between third- and first-person self-construals.
I feel like the recent disjunctive break in generating Deric’s MindBlog - occasioned by a two-week return to my former world of vision research - has been a useful one for me. (I will mention, by the way, that I was gratified at the recent vision meeting I attended when several doctoral and postdoctoral students told me that they look back on their time in my laboratory as one of the best times of their lives - a time when they were given structure and support, and also given freedom to grow the beginnings of their future independent professional selves.)
I’ve kept a journal since 1974, when I was into gestalt therapy, transactional analysis, and trips to Esalen to learn massage, attend workshops, and commune with the Monarch butterflies and whales of the Big Sur. In that journal I started to mark entries on psychology and mind with a tag (*mind) that I could search for. My reading on mind and brain grew out of the cellular neurobiology course I started with Julius Adler and then Tony Stretton in 1970, and it formed a parallel track alongside my vision research laboratory work that finally resulted in a new course, The Biology of Mind, in 1994, and the 1999 book “Biology of Mind” that grew out of its lecture notes. A number of lectures and web essays in the early 2000s led to the startup of this MindBlog in February of 2006. Thinking about this stuff is how I have structured my life for over 40 years, and I realize that giving that up would be the same as giving up my self.
So..... I guess MindBlog, in some form, isn’t going away.

Wednesday, May 15, 2013

Deric’s MindBlog spends time in the past...in the future?

The past:  I’ve been spending the past two weeks in a former life. I was in Seattle last week to attend the annual meeting of ARVO (Association for Research in Vision and Ophthalmology), at which my last postdoc, Vadim Arshavsky, was awarded the Proctor Prize. The graphic in this post is from a lecture I gave on Tuesday to the final seminar this term of the McPherson Eye Research Institute here at U.W., describing the contributions of my laboratory (from 1968 to 1998) to understanding how light is converted into a nerve signal in our eyes. (The talk is posted here.)

The future:  I’m scratching my head about how (maybe whether?) to continue MindBlog.  It has had a good run since Feb. of 2006, and I'm kind of wondering if I should withdraw - as I did from the vision field - while I’m ahead, or at least cut back to less frequent, more thoughtful posts… I’m a bit dissatisfied that many of the posts are essentially expanded tweets, passing on the link and abstract of an article I find interesting.  I think this is lazy, but I do get ‘thank you’ emails for pointing out something that reader X is interested in.  A downside is that the time I spend scanning journals and chaining myself to the daily post regime makes it difficult for me to settle into deeper development of a few topics.  It also competes with the increasing amount of time I am spending on classical piano performance. I will be curious to see whether these rambling comments elicit any responses from the current 2,500 subscribers to MindBlog’s RSS feed or ~1,100 Twitter followers.

Monday, May 06, 2013

MindBlog in Seattle this week - hiatus in posts

There will be a hiatus in MindBlog posts for a while.
I'm spending this week at the ARVO (Association for Research in Vision and Ophthalmology) meeting, where a protege of mine, Vadim Arshavsky, whom I brought to my lab from the former USSR for post-doctoral training, is being given the field's Proctor Award for work done (mainly after leaving my laboratory) on understanding how the nerve signal initiated by a flash of light in our eyes is rapidly turned off.

Friday, May 03, 2013

Riding other people's coattails.

Another interesting bit from Psychological Science:
Two laboratory experiments and one dyadic study of ongoing relationships of romantic partners examined how temporary and chronic deficits in self-control affect individuals’ evaluations of other people. We suggest that when individuals lack self-control resources, they value such resources in other people. Our results support this hypothesis: We found that individuals low (but not high) in self-control use information about other people’s self-control abilities when judging them, evaluating other people with high self-control more positively than those with low self-control. In one study, participants whose self-control was depleted preferred people with higher self-control, whereas nondepleted participants did not show this preference. In a second study, we conceptually replicated this effect while using a behavioral measure of trait self-control. Finally, in a third study we found individuals with low (but not high) self-control reported greater dependence on dating partners with high self-control than on those with low self-control. We theorize that individuals with low self-control may use interpersonal relationships to compensate for their lack of personal self-control resources.

Thursday, May 02, 2013

Willpower and Abundance - The case for less.

I wanted to pass on some clips from Tim Wu's sane commentary in The New Republic on the recent Diamandis and Kotler book "Abundance: The Future Is Better Than You Think":
“The future is better than you think” is the message of Peter Diamandis’s and Steven Kotler’s book. Despite a flat economy and intractable environmental problems, Diamandis and his journalist co-author are deeply optimistic about humanity’s prospects. “Technology,” they say, “has the potential to significantly raise the basic standards of living for every man, woman, and child on the planet.... Abundance for all is actually within our grasp.”
Optimism is a useful motivational tool, and I see no reason to argue with Diamandis about the benefits of maintaining a sunny disposition...The unhappy irony is that Diamandis prescribes a program of “more” exactly at a point when a century of similar projects have begun to turn on us. To be fair, his ideas are most pertinent to the poorer parts of the world, where many suffer terribly from a lack of the basics. But in the rich and semi-rich parts of the world, it is a different story. There we are starting to see just what happens when we reach surplus levels across many categories of human desire, and it isn’t pretty. The unfortunate fact is that extreme abundance—like extreme scarcity, but in different ways—can make humans miserable. Where the abundance project has been truly successful, it has created a new host of problems that are now hitting humanity…
The worldwide obesity epidemic is our most obvious example of this “flip” from problems of scarcity to problems of surplus…There is no single cause for obesity, but the sine qua non for it is plenty of cheap, high-calorie foods. And such foods, of course, are the byproduct of our marvelous technologies of abundance, many of them celebrated in Diamandis’s book. They are the byproducts of the “Green Revolution,” brilliant techniques in industrial farming and the genetic modification of crops. We have achieved abundance in food, and it is killing us.
Consider another problem with no precise historical equivalent: “information overload.”…phrases such as “Internet addiction” describe people who are literally unable to stop consuming information even though it is destroying their lives…many of us suffer from milder versions of information overload. Nicholas Carr, in The Shallows, made a persuasive case that the excessive availability of information has begun to re-program our brains, creating serious issues for memory and attention span. Where people were once bored, we now face too many entertainment choices, creating a strange misery aptly termed “the paradox of choice” by the psychologist Barry Schwartz. We have achieved the information abundance that our ancestors craved, and it is driving us insane.
This very idea that too much of what we want can be a bad thing is hard to accept…But in today’s richer world, if you are overweight, in debt, and overwhelmed, there is no one to blame but yourself. Go on a diet, stop watching cable, and pay off your credit card—that’s the answer. In short, we think of scarcity problems as real, and surplus problems as matters of self-control…That may account for the current popularity of books designed to help readers control themselves. The most interesting among them is Willpower: Rediscovering the Greatest Human Strength, by Roy Baumeister and John Tierney.
The book’s most profound sections describe a phenomenon that they call “ego depletion,” a state of mental exhaustion where bad decisions are made. It turns out that being forced to make constant decisions is what causes ego depletion. So if willpower is a muscle, making too many decisions in one day is the equivalent of blowing out your hamstrings with too many squats…they recommend avoiding situations that cause ego-depletion altogether. And here is where we find the link between Abundance and Willpower…Over the last century, mainly through the abundance project, we have created a world where avoiding constant decisions is nearly impossible. We have created environments that are designed to destroy our powers of self-control by creating constant choices among abundant options. [We have] a negative feedback loop: we have more than ever, and therefore need more self-control than ever, but the abundance we’ve created destroys our ability to resist. It is a setup that Sisyphus might have actually envied. 
…it is increasingly the duty of the technology industry and the technologists to take seriously the challenge of human overload, and to give it as much attention as the abundance project. It is the first great challenge for post-scarcity thinkers…So advanced are our technological powers that we will be increasingly trying to create access to abundance and to limit it at the same time. Sometimes we must create both the thesis and the antithesis to go in the right direction. We have spent the last century creating an abundance that exceeds any human scale, and now technologists must turn their powers to controlling our, or their, creation.  

Wednesday, May 01, 2013

Overearning

Here is an interesting study from Hsee et al. on our tendency to keep working to earn more than we need for happiness, at the expense of that happiness.
Their abstract:
High productivity and high earning rates brought about by modern technologies make it possible for people to work less and enjoy more, yet many continue to work assiduously to earn more. Do people overearn—forgo leisure to work and earn beyond their needs? This question is understudied, partly because in real life, determining the right amount of earning and defining overearning are difficult. In this research, we introduced a minimalistic paradigm that allows researchers to study overearning in a controlled laboratory setting. Using this paradigm, we found that individuals do overearn, even at the cost of happiness, and that overearning is a result of mindless accumulation—a tendency to work and earn until feeling tired rather than until having enough. Supporting the mindless-accumulation notion, our results show, first, that individuals work about the same amount regardless of earning rates and hence are more likely to overearn when earning rates are high than when they are low, and second, that prompting individuals to consider the consequences of their earnings or denying them excessive earnings can disrupt mindless accumulation and enhance happiness.
And, their description of the paradigm used:
Participants are tested individually while seated at a table in front of a computer and wearing a headset. The procedure consists of two consecutive phases, each lasting 5 min. In Phase I, the participant can relax and listen to music (mimicking leisure) or press a key to disrupt the music and listen to a noise (mimicking work). For every certain number of times the participant listens to the noise (e.g., 20 times), he or she earns 1 chocolate; the computer keeps track and shows how many chocolates the participant has earned. The participant can only earn (not eat) the chocolates in Phase I and can only eat (and not earn more of) the chocolates in Phase II. The participant does not need to eat all of the earned chocolates in Phase II, but if any remain, they must be left on the table at the end of the study. Participants learn about these provisions in advance and are informed that they can decide how many chocolates to earn in Phase I and how many to eat in Phase II, and that their only objective is to make themselves as happy as possible during the experiment.
Our paradigm simulates a microcosmic life with a fixed life span; in the first half, one chooses between leisure and labor (earning), and in the second half, one consumes one’s earnings and may not bequeath them to others. In designing the paradigm, our priority was minimalism and controllability rather than realism and external validity. The paradigm was inspired by social scientists’ approaches to investigating complex real-world issues, such as unselfish motives, using minimalistic simulations, such as the ultimatum game. These simulations involve contrived features—for example, players cannot learn each other’s identities and need not worry about reputations—but such features are important because they allow researchers to control for normative reasons for unselfish behaviors and test for pure, unselfish motives. Likewise, our paradigm also involves contrived features—for example, rewards are chocolates rather than money, and participants cannot take their rewards from the lab—but these features are crucial for us to control for normative reasons for overearning effects and test for pure overearning tendencies.
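To make the "mindless accumulation" prediction concrete, here is a toy simulation in Python with assumed numbers of my own (an illustration, not the authors' model): an earner who works a fixed amount "until tired" overearns more and more as the earning rate rises, whereas an earner who stops at "enough" does not.

# A toy simulation of the "mindless accumulation" idea; all numbers are assumptions.
def mindless_earner(rate, blocks_until_tired=30):
    # Works a fixed number of noise-listening blocks regardless of the earning rate.
    return rate * blocks_until_tired

def mindful_earner(rate, chocolates_enjoyable=10, blocks_until_tired=30):
    # Stops as soon as earnings match what can actually be eaten in Phase II.
    blocks_needed = chocolates_enjoyable / rate
    return rate * min(blocks_needed, blocks_until_tired)

for rate in (0.2, 0.5, 1.0):   # chocolates earned per block of noise listening
    print(f"rate {rate}: mindless earns {mindless_earner(rate):.1f}, "
          f"mindful earns {mindful_earner(rate):.1f} (only 10 can be eaten)")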

Tuesday, April 30, 2013

The slippery slope of fear

LeDoux makes some useful comments on the confusion one encounters in studies of fear, especially those involving the amygdala. A clip:
‘Fear’ is used scientifically in two ways, which causes confusion: it refers to conscious feelings and to behavioral and physiological responses...As long as the term ‘fear’ is used interchangeably to describe both feelings and brain/bodily responses elicited by threats, confusion will continue. Restricting the scientific use of the term ‘fear’ to its common meaning and using the less-loaded term, ‘threat-elicited defense responses’, for the brain/body responses yields a language that more accurately reflects the way the brain evolved and works, and allows the exploration of processes in animal brains that are relevant to human behavior and psychiatric disorders without assuming that the complex constellation of states that humans refer to by the term fear are also consciously experienced by other animals. This is not a denial of animal consciousness, but a call for researchers not to invoke animal consciousness to explain things that do not involve consciousness in humans.

Monday, April 29, 2013

Lessons learned from a Chaos and Complexity seminar.

For ~15 years I have participated in the weekly Chaos and Complexity seminar at the Univ. of Wisconsin organized by physics professor Clint Sprott, and have given ~5 lectures to the group during that period. I want to pass on this link to Sprott's summary comments presented at the final meeting of the spring term on 5/7/2013, and in particular his closing remarks:
We have heard many speakers over the years make dire predictions, especially regarding the climate and the ecology, but I am more optimistic than most about our future for five fundamental reasons: 1) Negative feedback is at least as common as positive feedback, and it tends to regulate many processes. 2) Most nonlinearities are beneficial, putting inherent limits on the growth of deleterious effects. 3) Complex dynamical systems self-organize to optimize their fitness. 4) Chaotic systems are sensitive to small changes, making prediction difficult, but facilitating control. 5) Our knowledge and technology will continue to advance, meaning that new solutions to problems will be developed as they are needed or, more likely, soon thereafter in response to the need. Whether it's fusion reactors, geoengineering, vastly improved batteries, halting of the aging process, DNA cloning to restore extinct species, or some other game changer, things may get worse before they get better, but humans are enormously ingenious and adaptable and will rise to the challenge of averting disaster.
This is not a prediction that our problems will vanish or an argument for ignoring them. On the contrary, our choices and actions are the means by which society will reorganize to become even better in the decades to follow, albeit surely not a Utopia.

Friday, April 26, 2013

Teleological reasoning about nature: intentional design or relational perspectives?

Ojalehto et al. offer an interesting analysis of the assumptions underlying accounts of how we reason about natural phenomena. Some slightly edited clips from the abstract and paper:
According to the theory of ‘promiscuous teleology’, humans are naturally biased to (mistakenly) construe natural kinds as if they (like artifacts) were intentionally designed ‘for a purpose’ (i.e. clouds are 'for' raining). However, this theory introduces two paradoxes. First, if infants readily distinguish natural kinds from artifacts, as evidence suggests, why do school-aged children erroneously conflate this distinction? Second, if Western scientific education is required to overcome promiscuous teleological reasoning, how can one account for the ecological expertise of non-Western educated, indigenous people? We develop an alternative ‘relational-deictic’ interpretation, proposing that the teleological stance may not index a deep-rooted belief that nature was designed for a purpose, but instead may reflect an appreciation of the perspectival relations among living things and their environments.
A new relational-deictic framework can take into account a rich set of relations and perspectives among natural entities, permitting one to avoid cultural assumptions about the ‘right way’ to conceptualize nature, and identifying the claim for ‘intuitive theism’ as a culturally-infused stance. Kelemen writes that teleological reasoning is a ‘side-effect’ of people's natural inclination to ‘privilege intentional explanation’ and view ‘nature as an intentionally designed artifact.’ The relational-deictic framework outlined here offers a different interpretation: teleological reasoning reflects a tendency to think through perspectival relationships within (socio-ecological) webs of interdependency. On this view, the origins of teleological thinking are social and relational rather than individual and intentional. This has implications for ongoing debates about the primacy of social and relational theories in human development.
The relational-deictic interpretation opens new avenues for research into how people come to understand the natural world and their place within it. Teleological reasoning may not be immature or misguided. Instead, it may reflect young children's ecological perspective-taking abilities and serve as an entry-point for reasoning about socio-ecological systems of living things, rather than reasoning about isolated, abstracted, and essentialized individual kinds.

Thursday, April 25, 2013

Brain activity correlating with future antisocial activity.

From Aharoni et al.:
Identification of factors that predict recurrent antisocial behavior is integral to the social sciences, criminal justice procedures, and the effective treatment of high-risk individuals. Here we show that error-related brain activity elicited during performance of an inhibitory task prospectively predicted subsequent rearrest among adult offenders within 4 y of release (N = 96). The odds that an offender with relatively low anterior cingulate activity would be rearrested were approximately double that of an offender with high activity in this region, holding constant other observed risk factors. These results suggest a potential neurocognitive biomarker for persistent antisocial behavior.
A marker, fine, but as a guide to action? Suggesting more post-incarceration therapeutic effort for those with lower anterior cingulate activity?
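For readers who want a concrete sense of what "approximately double" the odds means, here is a minimal worked example with invented rearrest counts (not the study's data), chosen only so that the odds ratio comes out near 2.

# Hypothetical counts chosen for illustration; an odds ratio near 2 means the
# low-ACC group's odds of rearrest are about twice the high-ACC group's odds.
def odds(rearrested, not_rearrested):
    return rearrested / not_rearrested

low_acc_odds = odds(rearrested=30, not_rearrested=18)     # odds = 1.67
high_acc_odds = odds(rearrested=22, not_rearrested=26)    # odds = 0.85
print("odds ratio:", round(low_acc_odds / high_acc_odds, 2))   # -> 1.97, roughly double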

Wednesday, April 24, 2013

Body posture modulates action perception.

From Zimmermann et al:
Recent studies have highlighted cognitive and neural similarities between planning and perceiving actions. Given that action planning involves a simulation of potential action plans that depends on the actor's body posture, we reasoned that perceiving actions may also be influenced by one's body posture. Here, we test whether and how this influence occurs by measuring behavioral and cerebral (fMRI) responses in human participants predicting goals of observed actions, while manipulating postural congruency between their own body posture and postures of the observed agents. Behaviorally, predicting action goals is facilitated when the body posture of the observer matches the posture achieved by the observed agent at the end of his action (action's goal posture). Cerebrally, this perceptual postural congruency effect modulates activity in a portion of the left intraparietal sulcus that has previously been shown to be involved in updating neural representations of one's own limb posture during action planning. This intraparietal area showed stronger responses when the goal posture of the observed action did not match the current body posture of the observer. These results add two novel elements to the notion that perceiving actions relies on the same predictive mechanism as planning actions. First, the predictions implemented by this mechanism are based on the current physical configuration of the body. Second, during both action planning and action observation, these predictions pertain to the goal state of the action.

Tuesday, April 23, 2013

Where our brains compute musical reward.

Yet another fascinating chunk from Zatorre and collaborators:
We used functional magnetic resonance imaging to investigate neural processes when music gains reward value the first time it is heard. The degree of activity in the mesolimbic striatal regions, especially the nucleus accumbens, during music listening was the best predictor of the amount listeners were willing to spend on previously unheard music in an auction paradigm. Importantly, the auditory cortices, amygdala, and ventromedial prefrontal regions showed increased activity during listening conditions requiring valuation, but did not predict reward value, which was instead predicted by increasing functional connectivity of these regions with the nucleus accumbens as the reward value increased. Thus, aesthetic rewards arise from the interaction between mesolimbic reward circuitry and cortical networks involved in perceptual analysis and valuation.

Monday, April 22, 2013

Quiet - the world of introverts.

I recently visited my year-old grandson in Austin, TX, who turns out to be my opposite on Jerome Kagan's scale of temperamental introversion/extraversion. Like his mother, he is outgoing and gregarious, and wears me out very quickly with his intensity in play activities. Against this background I was struck by a book review by Judith Warner of Susan Cain's new book "Quiet", listed by the NY Times as one of the 10 major popular science books of the past year. Some clips from the review:
Too often denigrated and frequently overlooked in a society that’s held in thrall to an “Extrovert Ideal — the omnipresent belief that the ideal self is gregarious, alpha and comfortable in the spotlight,” Cain’s introverts are overwhelmed by the social demands thrust upon them. They’re also underwhelmed by the example set by the voluble, socially successful go-getters in their midst who “speak without thinking,” in the words of a Chinese software engineer whom Cain encounters in Cupertino, Calif.
Many of the self-avowed introverts she meets in the course of this book... ape extroversion... some fake it well enough to make it, going along to get along in a country that rewards the outgoing... Unchecked extroversion — a personality trait Cain ties to ebullience, excitability, dominance, risk-taking, thick skin, boldness and a tendency toward quick thinking and thoughtless action — has actually, she argues, come to pose a real menace of late. The outsize reward-seeking tendencies of the hopelessly outer-directed helped bring us the bank meltdown of 2008... she claims... it’s time to establish “a greater balance of power” between those who rush to speak and do and those who sit back and think. Introverts — who, according to Cain, can count among their many virtues the fact that “they’re relatively immune to the lures of wealth and fame” — must learn to “embrace the power of quiet.” And extroverts should learn to sit down and shut up.
Her accounts of introverted kids misunderstood and mishandled by their parents should give pause, for she rightly notes that introversion in children (often incorrectly viewed as shyness) is in some ways threatening to the adults around them. Indeed, in an age when kids are increasingly herded into classroom “pods” for group work, Cain’s insights into the stresses of nonstop socializing for some children are welcome; her advice that parents should choose to view their introverted offspring’s social style with understanding rather than fear is well worth hearing.
A...problem with Cain’s argument is her assumption that most introverts are actually suffering in their self-esteem. This may be true in the sorts of environments — Harvard Business School, corporate boardrooms, executive suites — that she knows best and appears to spend most of her time thinking about. Had she spent more time in other sorts of places, among other types of people — in research laboratories, for example, or among economists rather than businessmen and -women — she would undoubtedly have discovered a world of introverts quite contented with who they are, and who feel that the world has been good to them.

Friday, April 19, 2013

Free Will, continued - Prior unconscious brain activity predicts choices for abstract intentions!

I've been running a thread on free will and neuroscience in this blog, recently noting comments by Nahmias:
...As long as people understand that discoveries about how our brains work do not mean that what we think or try to do makes no difference to what happens, then their belief in free will is preserved. What matters to people is that we have the capacities for conscious deliberation and self-control that I’ve suggested we identify with free will.
...None of the evidence marshaled by neuroscientists and psychologists suggests that those neural processes involved in the conscious aspects of such complex, temporally extended decision-making are in fact causal dead ends. It would be almost unbelievable if such evidence turned up.
Almost unbelievable appears to have happened, with this from Soon et al. Interestingly, they identified a partial spatial and temporal overlap of choice-predictive signals with activity in the default mode network I reviewed in this past Monday's post. The abstract:
Unconscious neural activity has been repeatedly shown to precede and potentially even influence subsequent free decisions. However, to date, such findings have been mostly restricted to simple motor choices, and despite considerable debate, there is no evidence that the outcome of more complex free decisions can be predicted from prior brain signals. Here, we show that the outcome of a free decision to either add or subtract numbers can already be decoded from neural activity in medial prefrontal and parietal cortex 4 s before the participant reports they are consciously making their choice. These choice-predictive signals co-occurred with the so-called default mode brain activity pattern that was still dominant at the time when the choice-predictive signals occurred. Our results suggest that unconscious preparation of free choices is not restricted to motor preparation. Instead, decisions at multiple scales of abstraction evolve from the dynamics of preceding brain activity.
And, a chunk from their discussion:
It is interesting that mental calculation, the more complex task, had less predictive lead time than a simple binary motor choice in our previous study. This could tentatively reflect a general limitation of unconscious processing in the sense that unconscious processes might be restricted in their ability to develop or stabilize complex representations such as abstract intentions. It is also worth noting that both studies showed the same dissociation between cortical regions that were predictive of the content versus the timing of the decision. This implies that the formation of an intention to act depends on interactions between the choice-predictive and time-predictive regions. The temporal profile of this interaction is likely to determine when the earliest choice-predictive information is available and might differ between tasks.
There was a partial spatial overlap between the choice-predictive brain regions and the DMN. Interestingly, the state of the DMN (default mode network) during the early preparatory phase still resembled that during off-task or “resting” periods. This lends further credit to the notion that the preparatory signals were not a result of conscious engagement with the task. Furthermore, the spatial and temporal overlap hints at a potential involvement of the DMN in unconscious choice preparation.
To summarize, we directly investigated the formation of spontaneous abstract intentions and showed that the brain may already start preparing for a voluntary action up to a few seconds before the decision enters into conscious awareness. Importantly, these results cannot be explained by motor preparation or general attentional mechanisms. We found that frontopolar and precuneus/posterior cingulate encoded the content of the upcoming decision, but not the timing. In contrast, the pre-SMA predicted the timing of the decision, but not the content.

Thursday, April 18, 2013

Showing where moral intent is determined in our brains.

Interesting work from Koster-Hale et al.:
Intentional harms are typically judged to be morally worse than accidental harms. Distinguishing between intentional harms and accidents depends on the capacity for mental state reasoning (i.e., reasoning about beliefs and intentions), which is supported by a group of brain regions including the right temporo-parietal junction (RTPJ). Prior research has found that interfering with activity in RTPJ can impair mental state reasoning for moral judgment and that high-functioning individuals with autism spectrum disorders make moral judgments based less on intent information than neurotypical participants. Three experiments, using multivoxel pattern analysis, find that (i) in neurotypical adults, the RTPJ shows reliable and distinct spatial patterns of responses across voxels for intentional vs. accidental harms, and (ii) individual differences in this neural pattern predict differences in participants’ moral judgments. These effects are specific to RTPJ. By contrast, (iii) this distinction was absent in adults with autism spectrum disorders. We conclude that multivoxel pattern analysis can detect features of mental state representations (e.g., intent), and that the corresponding neural patterns are behaviorally and clinically relevant.
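For readers curious about the multivoxel pattern analysis mentioned in the abstract, below is a toy sketch of the core idea using simulated data and an off-the-shelf classifier (my own illustration, not the authors' pipeline): if the spatial pattern of responses across voxels reliably differs between intentional and accidental harms, a cross-validated classifier can decode the condition above chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)           # 0 = accidental harm, 1 = intentional harm
condition_pattern = rng.normal(0, 0.3, n_voxels)    # small, spatially distributed condition effect
patterns = rng.normal(0, 1.0, (n_trials, n_voxels)) + np.outer(labels, condition_pattern)

# Five-fold cross-validated decoding accuracy; chance level is 0.50.
accuracy = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")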

Wednesday, April 17, 2013

Brain training games don't actually make you smarter.

Wow... after having done several posts uncritically passing on studies by Jaeggi and others claiming that games to improve working memory, such as the n-back game, increase cognitive skills in other areas, I note that a number of studies have failed to replicate these findings. Gareth Cook has written an interesting article on this in The New Yorker, suggesting that claims made by commercial software sites like Cogmed, Lumosity, and CogniFit are bogus.
Over the last year, however, the idea that working-memory training has broad benefits has crumbled. One group of psychologists, lead by a team at Georgia Tech, set out to replicate the Jaeggi findings, but with more careful controls and seventeen different cognitive-skills tests. Their subjects showed no evidence whatsoever for improvement in intelligence. They also identified a pattern of methodological problems with experiments showing positive results, like poor controls and a reliance on a single measure of cognitive improvement. This failed replication was recently published in one of psychology’s top journals, and another, by a group at Case Western Reserve University, has been published since.
The recent meta-analysis, led by Monica Melby-Lervåg, of the University of Oslo, and also published in a top journal, is even more damning. Some studies are more convincing than others, because they include more subjects and show a larger effect. Melby-Lervåg’s paper laboriously accounts for this, incorporating what Jaeggi, Klingberg, and everyone else had reported. The meta-analysis found that the training isn’t doing anyone much good. If anything, the scientific literature tends to overstate effects, because teams that find nothing tend not to publish their papers. (This is known as the “file-drawer” effect.) A null result from meta-analysis, published in a top journal, sends a shudder through the spine of all but the truest of believers. In the meantime, a separate paper by some of the Georgia Tech scientists looked specifically at Cogmed’s training, which has been subjected to more scientific scrutiny than any other program. “The claims made by Cogmed,” they wrote, “are largely unsubstantiated.”
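As a side note on how a meta-analysis "accounts for" the fact that some studies are larger and more precise than others, here is a generic sketch of fixed-effect, inverse-variance weighting with invented numbers; it is not Melby-Lervåg's actual analysis, just an illustration of why a few big, precise null studies can swamp several small positive ones.

import math

def pooled_effect(effects, variances):
    # Weight each study's effect size by the inverse of its variance, so larger,
    # more precise studies contribute more to the pooled estimate.
    weights = [1.0 / v for v in variances]
    d = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return d, (d - 1.96 * se, d + 1.96 * se)   # pooled estimate and 95% CI

# Assumed studies: two small ones report gains, one large one reports none.
effects = [0.45, 0.60, 0.02]      # standardized mean differences (Cohen's d)
variances = [0.09, 0.12, 0.01]    # smaller variance = larger study
d, ci = pooled_effect(effects, variances)
print(f"pooled d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")   # CI includes zero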

Tuesday, April 16, 2013

Older brains - just as much nerve firing, but scrambled connections?

A group of colleagues at Imperial College London and Tsinghua University in Beijing fitted glass windows onto the skulls of old and young mice. Contrary to expectation, they observe that older mice have more firing points (axonal boutons) than younger ones, but that these are more erratic in their activity, with high turnover rates and wavering firing strengths. The older mice also performed less well on a memory test. The suggestion, then, is that the mental decline seen in aging may be due more to disorderly wiring than to loss of nerve cells. Their abstract:
Aging is a major risk factor for many neurological diseases and is associated with mild cognitive decline. Previous studies suggest that aging is accompanied by reduced synapse number and synaptic plasticity in specific brain regions. However, most studies, to date, used either postmortem or ex vivo preparations and lacked key in vivo evidence. Thus, whether neuronal arbors and synaptic structures remain dynamic in the intact aged brain and whether specific synaptic deficits arise during aging remains unknown. Here we used in vivo two-photon imaging and a unique analysis method to rigorously measure and track the size and location of axonal boutons in aged mice. Unexpectedly, the aged cortex shows circuit-specific increased rates of axonal bouton formation, elimination, and destabilization. Compared with the young adult brain, large (i.e., strong) boutons show 10-fold higher rates of destabilization and 20-fold higher turnover in the aged cortex. Size fluctuations of persistent boutons, believed to encode long-term memories, also are larger in the aged brain, whereas bouton size and density are not affected. Our data uncover a striking and unexpected increase in axonal bouton dynamics in the aged cortex. The increased turnover and destabilization rates of large boutons indicate that learning and memory deficits in the aged brain arise not through an inability to form new synapses but rather through decreased synaptic tenacity. Overall our study suggests that increased synaptic structural dynamics in specific cortical circuits may be a mechanism for age-related cognitive decline.

Monday, April 15, 2013

A review - Mindfulness meditation and our brain's default versus attentional networks.

I've been doing some homework on potential topics to work up into a lecture, one of them being brain correlates of various meditative, attentional, or default mode states. The vocabulary used is sometimes inconsistent between papers, but two categories emerge. One uses terms for thought like default, narrative focus, phenomenal, social reasoning, theory of mind, baseline setting, self-referential, introspective, and stimulus-independent. The contrasting descriptors are attentional, direct experience, experiential focus, task-positive network, and physical cause/effect reasoning.

This cooks down roughly to distinguishing between brain networks whose primary activity occurs during internal narrative focus versus those activated during direct attentional experience.

In reviewing previous MindBlog posts on the default network I came up with a partial bibliography of reviews and experiments, and thought some readers might find it useful. Here is the list, in no particular order, with brief notes:

Reciprocal repression (mutual inhibition) between networks - nice graphics  - some muddying of definitions

Relationship of this mutual inhibition to mindfulness meditation , which notes Farb et al., 2007

Review (NYTimes) on power of concentration - mindfulness training causing increased connectivity in attentional and default networks. 

Review with graphics of MRI of default network activated by autobiographical memory, envisioning future, theory of mind, moral decision making. 

Tierney - virtues of a wandering mind.  (context, larger agenda, creativity)

Review of varieties of resting state activity

Change between operating systems during eyeblink.

Different components of default mode active in different kinds of memory.

Mental time travel and default network.

Synchronization of both modes between individuals.

Default network can be realized by multiple architectures (split brain patients).

Default network as underpinning of cerebral ‘connectome‘  - good graphic.

Development of human default network from being sparsely functionally connected at 7-9 years.

Default mode in Chimps and Monkeys

Association of default network with midline structures.

Friday, April 12, 2013

Why old folks more easily lose their way.

Wiener et al. make observations that shed light on why older people have more difficulty finding their car in a shopping mall's large parking lot if they exit the mall by a different door than the one they entered through (or following directions involving an intersection if they approach the intersection from a different direction than the one used while learning them). From their introduction:
Everyday navigation can be based on different strategies. The hippocampus plays a key role in cognitive map or place strategies that rely on allocentric processing, whereas the parietal cortex and striatal circuits are involved in route or response strategies...To test the hypothesis that cognitive aging not only results in a shift away from allocentric strategies but in a specific preference for beacon-based strategies, we developed a novel experimental paradigm: participants first learned a route along a number of intersections and were then asked to rejoin the original route approaching the intersections from different directions. Trials in which participants approached the intersections from a direction different from that during training (see Fig. 1) allowed us (1) to compare the use and adoption of route-learning strategies between young and older participants and (2) to test for specific preferences for beacon-based strategies in older participants.
The abstract:
Efficient spatial navigation requires not only accurate spatial knowledge but also the selection of appropriate strategies. Using a novel paradigm that allowed us to distinguish between beacon, associative cue, and place strategies, we investigated the effects of cognitive aging on the selection and adoption of navigation strategies in humans. Participants were required to rejoin a previously learned route encountered from an unfamiliar direction. Successful performance required the use of an allocentric place strategy, which was increasingly observed in young participants over six experimental sessions. In contrast, older participants, who were able to recall the route when approaching intersections from the same direction as during encoding, failed to use the correct place strategy when approaching intersections from novel directions. Instead, they continuously used a beacon strategy and showed no evidence of changing their behavior across the six sessions. Given that this bias was already apparent in the first experimental session, the inability to adopt the correct place strategy is not related to an inability to switch from a firmly established response strategy to an allocentric place strategy. Rather, and in line with previous research, age-related deficits in allocentric processing result in shifts in preferred navigation strategies and an overall bias for response strategies. The specific preference for a beacon strategy is discussed in the context of a possible dissociation between beacon-based and associative-cue-based response learning in the striatum, with the latter being more sensitive to age-related changes.

Thursday, April 11, 2013

Defining when a visual stimulus becomes conscious to us.

Llinás and collaborators do a nice dissection of our conscious versus unconscious visual processing and note the timing (~240 milliseconds) of a brain signal that correlates with our conscious awareness of a stimulus. (This is the same time epoch that I invoke with the "millisecond manager" term used in several of my essays in the left column of this blog - a period during which we can note the onset of a visual or emotional perception before further action or interpretation begins.)
At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated.

Wednesday, April 10, 2013

Your smartphone, your social brain, and your vagus nerve.

I am always struck, when I go to a local Starbucks for my noon coffee or do a happy hour at a local bar, that the great majority of those present are staring intently at their smartphones or tablets, while sitting in an environment meant to encourage interaction. As we spend less and less time engaging in positive face-to-face social contact in public places, what are we losing? The increasing aversion to human contact exhibited by people addicted to staring at their small screens suggests that our social brain follows the same rule as the rest of our brain and body: "Use it or lose it." I've done a post pointing to how modern hi-tech dialog devices fail to engage the evolved brain and body synchronization that accompanies face-to-face dialog.

In a recent NYTimes piece, Barbara Fredrickson describes some of her recent work on countering the toxic effects of isolation from direct person-to-person contact. Some clips, to which I have added a few reference links:
My research team and I conducted a longitudinal field experiment on the effects of learning skills for cultivating warmer interpersonal connections in daily life. Half the participants, chosen at random, attended a six-week workshop on an ancient mind-training practice known as metta, or “lovingkindness,” that teaches participants to develop more warmth and tenderness toward themselves and others....We discovered that the meditators not only felt more upbeat and socially connected, but they also altered a key part of their cardiovascular system called vagal tone. Scientists used to think vagal tone was largely stable, like your height in adulthood. Our data show that this part of you is plastic, too, and altered by your social habits.
To appreciate why this matters, here’s a quick anatomy lesson. Your brain is tied to your heart by your vagus nerve. Subtle variations in your heart rate reveal the strength of this brain-heart connection, and as such, heart-rate variability provides an index of your vagal tone. By and large, the higher your vagal tone the better. It means your body is better able to regulate the internal systems that keep you healthy, like your cardiovascular, glucose and immune responses.
Beyond these health effects, the behavioral neuroscientist Stephen Porges has shown that vagal tone is central to things like facial expressivity and the ability to tune in to the frequency of the human voice. By increasing people’s vagal tone, we increase their capacity for connection, friendship and empathy.
In short, the more attuned to others you become, the healthier you become, and vice versa. This mutual influence also explains how a lack of positive social contact diminishes people. Your heart’s capacity for friendship also obeys the biological law of “use it or lose it.” If you don’t regularly exercise your ability to connect face to face, you’ll eventually find yourself lacking some of the basic biological capacity to do so.
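For readers who want to see what such an index looks like in practice, here is a minimal sketch (my own illustration, not from Fredrickson's study) of one standard time-domain measure of vagally mediated heart-rate variability, RMSSD - the root mean square of successive differences between the intervals separating heartbeats. The interval values are made up.

import math

def rmssd(rr_intervals_ms):
    # Root mean square of successive differences between RR intervals (ms);
    # higher values are generally read as higher vagal (parasympathetic) tone.
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 845, 790, 860, 825, 880, 800]   # toy series of times (ms) between heartbeats
print(round(rmssd(rr), 1), "ms")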

Tuesday, April 09, 2013

Mindfulness training improves working memory and cognitive performance while reducing mind wandering.

Yet another study, by Mrazek et al., on the salutary effects of mindfulness:
Given that the ability to attend to a task without distraction underlies performance in a wide variety of contexts, training one’s ability to stay on task should result in a similarly broad enhancement of performance. In a randomized controlled investigation, we examined whether a 2-week mindfulness-training course would decrease mind wandering and improve cognitive performance. Mindfulness training improved both GRE reading-comprehension scores and working memory capacity while simultaneously reducing the occurrence of distracting thoughts during completion of the GRE and the measure of working memory. Improvements in performance following mindfulness training were mediated by reduced mind wandering among participants who were prone to distraction at pretesting. Our results suggest that cultivating mindfulness is an effective and efficient technique for improving cognitive function, with wide-reaching consequences.
(The GRE is the Graduate Record Examination, meant to test the cognitive capacity of graduate-school applicants. Readers interested in the details of the experiment, performed on the usual batch of ~50 college undergraduates, can email me.)
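The key statistical claim in the abstract is a mediation: the training effect on performance operated through reduced mind wandering. Here is a bare-bones product-of-coefficients sketch of that logic, purely illustrative and not the authors' analysis; the variable names are hypothetical.

```python
# Sketch: does the effect of training (X) on performance change (Y) pass
# through reduced mind wandering (M)? Simple product-of-coefficients mediation.
import numpy as np

def simple_mediation(x, m, y):
    """x: 0/1 training group; m: change in mind wandering; y: change in GRE score."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    ones = np.ones_like(x)
    # path a: X -> M
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # path b and direct effect c': X and M -> Y
    coefs = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0]
    c_prime, b = coefs[1], coefs[2]
    return {"indirect (a*b)": a * b, "direct (c')": c_prime}
```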

Monday, April 08, 2013

Alteration of paralimbic self awareness circuits in behavioral addiction.

Changeux and collaborators look at the brain correlates of pathological gambling, evaluating whether addictions might arise from a predisposition, preceding any use of drugs, linked to abnormal functioning of frontal circuitry associated with self-awareness. Some clips:
The introduction of magnetoencephalography (MEG) has made it possible to study neural mechanisms even in deeper parts of the cortex with a high degree of temporal resolution in combination with a decent spatial resolution. This allows investigation of one of the major networks of the brain, the paralimbic interaction between the medial prefrontal/anterior cingulate (ACC) and medial parietal/posterior cingulate (PCC) cortices. This interaction has in several recent studies been associated with self-awareness.

Schematic representation of the medial cortical components of the paralimbic network of self-awareness, showing the localization of the medial sources for MEG registration. Red, ACC; blue, PCC.

They compared 14 pathological gamblers and 11 age- and sex-matched controls using a stop-signal task consisting of “go” and “nogo” trials. In go trials, the participant is instructed to press a button as soon as an “O” appears on the screen. In nogo trials, the O is followed by an “X,” and the participant is instructed to withhold his response. The task can be used to measure a number of variables associated with impulsivity such as the stop-signal reaction time (SSRT), which is the time required for the stop signal to be processed so a response can be withheld. In particular, the SSRT has been widely used as a valid measure of impulsivity in general, and in studies of patients suffering from addiction.
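For readers who want to see how the SSRT is typically computed from such a task, here is a minimal sketch of the standard "integration method"; it is an illustration of the general approach, not the code used in this study.

```python
# Sketch of the integration method for estimating the stop-signal reaction time:
# take the go-RT at the percentile equal to the probability of (incorrectly)
# responding on stop trials, then subtract the mean stop-signal delay.
import numpy as np

def ssrt_integration(go_rts_ms, p_respond_on_stop, mean_ssd_ms):
    go_rts = np.sort(np.asarray(go_rts_ms, dtype=float))
    idx = int(np.ceil(p_respond_on_stop * len(go_rts))) - 1
    nth_rt = go_rts[max(idx, 0)]          # go RT at the P(respond | stop) percentile
    return nth_rt - mean_ssd_ms

# e.g., if a subject responds on 50% of stop trials with a mean delay of 250 ms:
# ssrt_integration(go_rts, 0.5, 250)
```
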
The main finding of the present study was that behavioral addiction is linked to abnormal activity in, and communication between, nodal regions of the paralimbic network of self-awareness, the ACC and PCC, which are effective in different aspects of self-awareness processing. Pathological gamblers had lower synchronization between the ACC and PCC at rest in the high gamma band compared with controls, and failed to show an increase in gamma synchronization during rest compared with the task (as observed in controls). These findings could not be attributed to previous drug abuse or smoking habits. Furthermore, pathological gamblers without previous drug abuse had lower PCC power than controls and gamblers with previous stimulant abuse during the stop-signal task. In contrast, a history of stimulant abuse in gamblers caused a marked increase in power across regions and frequencies both at rest and during the stop-signal task.
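"Synchronization between the ACC and PCC in the high gamma band" is the kind of quantity that is often captured with a phase-locking measure. The rough sketch below shows one way such a measure can be computed between two source time series; the study's actual connectivity metric may differ, and all parameters here are assumptions.

```python
# Sketch: phase-locking value (PLV) between two band-limited source signals,
# e.g., ACC and PCC time courses in a high-gamma range. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(sig_a, sig_b, fs, band=(60.0, 90.0)):
    """sig_a, sig_b: 1-D source time series; fs: sampling rate in Hz."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_a = np.angle(hilbert(filtfilt(b, a, sig_a)))
    phase_b = np.angle(hilbert(filtfilt(b, a, sig_b)))
    # length of the mean phase-difference vector: 1 = perfect locking, 0 = none
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
```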

Friday, April 05, 2013

Training our emotional brain - improving affective control.

Schweizer et al. suggest that our ability to keep a cool head in emotionally charged situations can be enhanced by working memory training, because both functions depend on the same frontoparietal neural circuitry, which includes the dorsolateral prefrontal cortex (PFC), the inferior parietal cortex, and the anterior cingulate cortex. Through projections from its lateral and medial PFC components to the amygdala and midbrain nuclei, this circuitry can exert downregulatory effects on experienced emotional distress. Here is their abstract, followed by a description of the emotional working memory (not regular working memory) training that was evaluated.
Affective cognitive control capacity (e.g., the ability to regulate emotions or manipulate emotional material in the service of task goals) is associated with professional and interpersonal success. Impoverished affective control, by contrast, characterizes many neuropsychiatric disorders. Insights from neuroscience indicate that affective cognitive control relies on the same frontoparietal neural circuitry as working memory (WM) tasks, which suggests that systematic WM training, performed in an emotional context, has the potential to augment affective control. Here we show, using behavioral and fMRI measures, that 20 d of training on a novel emotional WM protocol successfully enhanced the efficiency of this frontoparietal demand network. Critically, compared with placebo training, emotional WM training also accrued transfer benefits to a “gold standard” measure of affective cognitive control–emotion regulation. These emotion regulation gains were associated with greater activity in the targeted frontoparietal demand network along with other brain regions implicated in affective control, notably the subgenual anterior cingulate cortex. The results have important implications for the utility of WM training in clinical, prevention, and occupational settings.
A description of the training:
The emotional working memory training... comprised an affective dual n-back task consisting of a series of trials each of which simultaneously presented a face (for 500 ms) on a 4 × 4 grid on a monitor and a word (for 500–950 ms) over headphones. Each picture-word pair was followed by a 2500 ms interval during which participants responded via button press if either/both stimuli from the pair matched the corresponding stimuli presented n positions back; 60% of the words (e.g., evil, rape) and faces (fearful, angry, sad, or disgusted expressions) were emotionally negative with the others affectively neutral in tone. Trial presentation order was randomized across training sessions.
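To make the matching rule of the dual n-back concrete, here is a minimal bookkeeping sketch: on each trial a grid position and a spoken word are presented, and the participant indicates whether either (or both) matches the item n positions back. The negative words come from the description above; the neutral fillers and all parameters are hypothetical, and the emotional face stimuli are omitted entirely.

```python
# Sketch: generate dual n-back trials and mark which ones are position/word matches.
import random

def make_dual_nback_trials(n_trials, n_back=2, n_positions=16,
                           words=("evil", "rape", "table", "spoon")):
    positions = [random.randrange(n_positions) for _ in range(n_trials)]  # 4 x 4 grid
    spoken = [random.choice(words) for _ in range(n_trials)]
    trials = []
    for i in range(n_trials):
        pos_match = i >= n_back and positions[i] == positions[i - n_back]
        word_match = i >= n_back and spoken[i] == spoken[i - n_back]
        trials.append({"position": positions[i], "word": spoken[i],
                       "position_match": pos_match, "word_match": word_match})
    return trials
```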

Thursday, April 04, 2013

Impersonating your younger self makes your body physiologically younger - a rediscovered post.

For several years I've been trying to find or recall a MindBlog post or an article I had read, and couldn't come up with it. A blog reader sent an email recalling it, and I still couldn't find it. FINALLY, on doing a string search in this blog (for 'mindfulness') I found it: an August 2010 post that I had given the misleading title "The Psychology of Possibility." It referenced an article in Harvard Magazine on the work of Ellen Langer (1,2,3). Some of her early work is fascinating, and the post is worth repeating here:

An interesting article in Harvard Magazine describes the life work of Ellen Langer: her demonstrations that our social self-image (old versus young, for example) strongly patterns our actual vitality and physiology, her work on mindfulness, unconscious processing, etc. I recommend that you read the article. Here are some clips from its beginning that hooked me. (I actually did my own mini-repeat of the experiment described, a simple self-experiment of pretending that I had been transported 40 years back in time, and convinced myself I was experiencing some of the effects reported.)...
In 1981, early in her career at Harvard, Ellen Langer and her colleagues piled two groups of men in their seventies and eighties into vans, drove them two hours north to a sprawling old monastery in New Hampshire, and dropped them off 22 years earlier, in 1959. The group who went first stayed for one week and were asked to pretend they were young men, once again living in the 1950s. The second group, who arrived the week afterward, were told to stay in the present and simply reminisce about that era. Both groups were surrounded by mid-century mementos—1950s issues of Life magazine and the Saturday Evening Post, a black-and-white television, a vintage radio—and they discussed the events of the time: the launch of the first U.S. satellite, Castro’s victory ride into Havana, Nikita Khrushchev and the need for bomb shelters.

...Before and after the experiment, both groups of men took a battery of cognitive and physical tests, and after just one week, there were dramatic positive changes across the board. Both groups were stronger and more flexible. Height, weight, gait, posture, hearing, vision—even their performance on intelligence tests had improved. Their joints were more flexible, their shoulders wider, their fingers not only more agile, but longer and less gnarled by arthritis. But the men who had acted as if they were actually back in 1959 showed significantly more improvement. Those who had impersonated younger men seemed to have bodies that actually were younger.

Wednesday, April 03, 2013

Do we need an Apollo moon project for the brain?

I have collected a sampling of the many commentaries on the Brain Activity Map project, to which Barack Obama alluded in his State of the Union address and which is becoming a high-profile, 3-billion-dollar endeavor. For a few examples, see the NYTimes, The Atlantic, and PLOS Blogs. I finally want to put in my two cents' worth to say that such an effort is completely misguided. But first, clips from a Science article contributed by 'big science' luminaries who would profit from such a project.
...the mechanisms of perception, cognition, and action remain mysterious because they emerge from the real-time interactions of large sets of neurons in densely interconnected, widespread neural circuits. It is time for a large-scale effort in neuroscience to create and apply a new generation of tools to enable the functional mapping and control of neural activity in brains with cellular and millisecond resolution...This initiative, the Brain Activity Map (BAM), could put neuroscientists in a position to understand how the brain produces perception, action, memories, thoughts, and consciousness
The last phrase, in particular, is borderline delusional. As John Horgan points out, we don't even see the side of the barn yet. Apart from the fact that we don’t know what to include in a simulation and what to leave out, we already have conclusive evidence that a search for a road map of stable neural pathways that can represent brain functions is futile. Edited from Horgan:
...the brain is radically unlike and more complex than any existing computer. A typical brain contains 100 billion cells, and each cell is linked via synapses to as many as 100,000 others. Synapses are awash in neurotransmitters, hormones, modulatory peptides (small proteins), neural-growth factors and other chemicals that affect the transmission of signals, and synapses constantly form and dissolve, weaken and strengthen, in response to new experiences...not only do old brain cells die, new ones can form via neurogenesis...many genes are constantly turning on and off and thereby further altering operations of brain nerve cells...the brain may be processing information at many levels below and above that of individual neurons and synapses...each individual neuron, rather than resembling a transistor, may be more like a computer in its own right, engaging in complex information-processing.
I fear that these big, much-hyped initiatives will turn out to be as disappointing as the Decade of the Brain. Rather than boosting the status of neuroscience, they may harm its credibility.
A particularly telling story comes from my long-time friend and colleague Tony Stretton at the University of Wisconsin, who studies the very simple nervous system of the parasitic nematode Ascaris suum, which has only 298 neurons and for which a functional circuit, based on the morphological synapses scored by electron microscopy, has been obtained. This is just the sort of information the Brain Activity Map project is trying to obtain for our brains. So, do we know how the Ascaris nervous system works? No, we're not even close, because Stretton has discovered that there are numerous peptides (as many as 250) that modulate the activity of neurons. Go figure how neurons in that complex modulatory soup work!! And multiply the problem by at least a billion for our brains.

To be fair, the vigorous discussion over the merits of a big push has led, as Markoff and Gorman describe in the NYTimes, to a recasting of the enterprise as an attempt to better define the playing field, rather than an assumption that we already know what it is.

Tuesday, April 02, 2013

A next generation of antidepressants?

Russo and Charney do a brief write-up on recent work of Nasca et al., who find that a common dietary supplement, L-acetylcarnitine, is a potential rapidly acting antidepressant:
Over the past 50 y, there have been few mechanistically distinct drugs for the treatment of major depressive disorders, despite the fact that nearly two-thirds of patients do not achieve full remission of symptoms on currently available antidepressants. In addition, even when adequate remission is achieved, patients require 2–4 wk of treatment before any significant effects, increasing the risk for complications, such as suicide. This delay in effectiveness has resulted in a major push to identify and develop novel therapeutics with more rapid effects. The recent identification of ketamine as a rapid antidepressant effective in treatment-resistant patients has been groundbreaking.
Nasca et al. describe in PNAS a unique potential rapidly acting antidepressant, l-acetylcarnitine (LAC), which is a dietary supplement that acts by acetylating protein targets to control their function. LAC is reported to be well tolerated and can readily cross the blood-brain barrier. A recent study suggests it has promise in the treatment of Parkinson disease because of its neuroprotective properties. Strikingly, LAC exhibits antidepressant efficacy within 2–3 d following intraperitoneal administration in rodents, compared with 2–3 wk with a standard antidepressant treatment, such as chlorimipramine. Although LAC is relatively nonspecific and can target many biological pathways, it is suggested by Nasca et al. to promote rapid antidepressant responses by acetylation of histone proteins that control the transcription of BDNF and metabotropic glutamate 2 (mGlu2) receptors in the hippocampus (Hipp) and prefrontal cortex (PFC).
One of the more impressive aspects of this article is that Nasca et al. verify rapid antidepressant efficacy of LAC in both a genetic rat model of susceptibility [Flinders Sensitive Line (FSL)] and following chronic stress exposure, factors that are thought to be the primary cause of depression in humans. Although there is clearly far more work necessary to understand the mechanisms of antidepressant action of LAC in rodents, and the dose and relative safety profile for depression treatment in humans, these exciting results are a first step toward that goal.

Monday, April 01, 2013

What are philosophers good for?

Perhaps an appropriate post for April 1st...If you are looking for a good headache, take yourself to Gary Gutting's rehashing of the "can consciousness be explained in physical terms?" debate by dragging out the classic case of Mary, the scientist who knows all the physical facts about colors and their perception but has never experienced color, and the philosophical zombie, defined as physically identical to you or me but utterly lacking in internal subjective experience. Gutting solicits comments on his article, and in a subsequent article presents a selection of the responses. From Gutting's final paragraph:
...my conclusion is that neither the Mary nor the Zombie Argument makes a decisive case against physicalism...professional philosophers have uncovered a number of subtle and complex problems for both arguments. For anyone interested in pursuing the discussion further, I would recommend the Stanford Encyclopedia of Philosophy articles “Qualia: The Knowledge Argument” (by Martine Nida-Rümelin) and “Zombies” (by Robert Kirk).
I like Metzinger's stance that consciousness is epistemologically irreducible (see his book "The Ego Tunnel"). There is one reality, one kind of fact, but two kinds of knowledge, first-person knowledge and third-person knowledge, that never can be conflated. There is a long list of ideas on why consciousness evolved and what it is good for: constructing goal hierarchies and long-term plans, enhancing social coordination, etc. I like Metzinger's description of consciousness as a new kind of virtual organ. Unlike the permanent hardware of the liver, kidney, or heart, it is not always present; virtual organs form for a certain time when needed (like desire, courage, anger, or an immune response)..."they are a new computational strategy, that makes classes of facts globally available and allows attending, flexible reacting, within context." "Reality generation" allowed animals to represent explicitly the fact that something is actually the case, that the world is present. (Conscious color gives information about nutritional value, as with red berries among green leaves; empathy gives information about the emotional state of conspecifics.)

For those of you who like this sort of stuff I point out "A darwinist lynch mob goes after a philosopher" by Leon Wieseltier in the March 11 New Republic, on some outraged reactions to Nagel's new book: "Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False."

Also,"Was Wittgenstein Right? by Paul Horwich:
Wittgenstein claims that there are no realms of phenomena whose study is the special business of a philosopher, and about which he or she should devise profound a priori theories and sophisticated supporting arguments. There are no startling discoveries to be made of facts, not open to the methods of science, yet accessible “from the armchair” through some blend of intuition, pure reason and conceptual analysis. Indeed the whole idea of a subject that could yield such results is based on confusion and wishful thinking.
To which Michael Lynch makes a rejoinder.

 I have to end by repeating another old chestnut:
Philosophy, n. A route of many roads leading from nowhere to nothing. -AMBROSE BIERCE, The Devil's Dictionary

Friday, March 29, 2013

Brain activity associated with the "Cocktail Party Effect."

Zion et al. report interesting experiments, with an array of subdural electrodes implanted in surgical epilepsy patients, that show what is happening as we attend to one of several voices talking at once. From a commentary by Miller:
The data clearly show that both low-frequency phase (delta-theta, 1–7 Hz) and high gamma power (70–150 Hz) yield consistent trial-to-trial responses to speech. Other frequency bands do not, nor does low frequency power—adding weight to the argument that speech tracking is partly due to entrainment of endogenous rhythms. However, these effects are not equally distributed across cortical areas. The high-gamma tracking tends to be clustered in the superior temporal lobe and the low-frequency phase response is more widespread, including superior and anterior temporal regions and inferior parietal and frontal lobes. Across electrodes though, both the low-frequency phase and high-gamma power showed more consistent responses to the attended versus the ignored speech. Corroborating this observation, speech envelope acoustics could only be reconstructed from neural responses for the attended talker, not the unattended.
The Zion et al. abstract:
The ability to focus on and understand one talker in a noisy social environment is a critical social-cognitive capacity, whose underlying neuronal mechanisms are unclear. We investigated the manner in which speech streams are represented in brain activity and the way that selective attention governs the brain’s representation of speech using a “Cocktail Party” paradigm, coupled with direct recordings from the cortical surface in surgical epilepsy patients. We find that brain activity dynamically tracks speech streams using both low-frequency phase and high-frequency amplitude fluctuations and that optimal encoding likely combines the two. In and near low-level auditory cortices, attention “modulates” the representation by enhancing cortical tracking of attended speech streams, but ignored speech remains represented. In higher-order regions, the representation appears to become more “selective,” in that there is no detectable tracking of ignored speech. This selectivity itself seems to sharpen as a sentence unfolds.
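The reconstruction result mentioned in Miller's commentary (the speech envelope could be recovered from neural responses only for the attended talker) rests on a linear decoding idea. Here is a schematic sketch of that idea, learning a ridge-regularized mapping from time-lagged neural responses back to the envelope; it is not the authors' pipeline, and all parameters are assumptions.

```python
# Sketch: reconstruct a speech envelope from time-lagged neural responses with
# ridge regression, and score the reconstruction by its correlation with the
# true envelope. Illustrative only.
import numpy as np

def lagged_design(neural, max_lag):
    """neural: (n_times, n_channels); stack copies shifted by 0..max_lag samples."""
    cols = [np.roll(neural, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(cols, axis=1)
    X[:max_lag] = 0.0                      # zero out rows affected by wrap-around
    return X

def reconstruct_envelope(neural, envelope, max_lag=20, ridge=1.0):
    X = lagged_design(neural, max_lag)
    # ridge-regularized least squares: w = (X'X + lambda*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)
    reconstruction = X @ w
    return np.corrcoef(reconstruction, envelope)[0, 1]
```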

Thursday, March 28, 2013

The perils of perfectionism, and the world we are losing.

I want to mention one of the many items in my queue of articles for potential posts that I have so far neglected. Evgeny Morozov does a precis of his new book “To Save Everything, Click Here: The Folly of Technological Solutionism.”
Silicon Valley’s technophilic gurus and futurists have embarked on a quest to develop the ultimate patch to the nasty bugs of humanity…Facebook’s former marketing director, enthused about a trendy app to “crowdsource absolutely every decision in your life.” Called Seesaw, the app lets you run instant polls of your friends and ask for advice on anything…Jean-Paul Sartre, the existentialist philosopher who celebrated the anguish of decision as a hallmark of responsibility, has no place in Silicon Valley.
All these efforts to ease the torments of existence might sound like paradise to Silicon Valley. But for the rest of us, they will be hell. They are driven by a pervasive and dangerous ideology that I call “solutionism”: an intellectual pathology that recognizes problems as problems based on just one criterion: whether they are “solvable” with a nice and clean technological solution at our disposal. Thus, forgetting and inconsistency become “problems” simply because we have the tools to get rid of them — and not because we’ve weighed all the philosophical pros and cons…Given Silicon Valley’s digital hammers, all problems start looking like nails, and all solutions like apps…Whenever technology companies complain that our broken world must be fixed, our initial impulse should be to ask: how do we know our world is broken in exactly the same way that Silicon Valley claims it is? What if the engineers are wrong and frustration, inconsistency, forgetting, perhaps even partisanship, are the very features that allow us to morph into the complex social actors that we are?
In the same apocalyptic spirit, Edward Hoagland writes a lyrical elegy to the natural world we are losing:
Aesop, the fabulist and slave who, like Scheherazade, may have won his freedom by the magic of his tongue and who supposedly shared the Greek island of Samos with Pythagoras 2,500 years ago, nailed down our fellowship with other beasties of the animal kingdom. Yet we seem to have reached an apogee of separation since then. The problem is, we find ourselves quite ungovernable when operating solo, shredding our habitat, while hugging our dogs and cats as if for consolation and dieting on whole-food calories if we are affluent enough. Google Earth and genome games also lend us a fitful confidence that everything is under control.
It’s a steeplechase, hell-for-leather and exhilarating, for the highest stakes, but not knowing where we’re going. Call it progress or metastasizing, what we have done as a race, a species or a civilization is dumbfounding. Every inch of the planet is ours, we claim, and elements of clear improvement are intertwined with cancerous excess
…Aesopian metaphors were artesian if not prehistoric. The tortoise and the hare, the lion saved by the mouse, the monkey who would be king, the dog in the manger, the dog and his shadow, the country mouse and the city mouse, the wolf in sheep’s clothing, the raven and the crow, the heron and the fish, the peacock and the crane. From where will we draw replacement similes and language? Pop culture somersaults “bad” to mean good, “cool” to mean warm, and bustles and bodices segue into tank tops and cargo pants, as in a robust society they should. But will a natural keel remain, as we face multiflex, multiplex change? “Hogging” the spotlight, playing possum, resembling a deer in the headlights, being buffaloed or played like a fish: will the clarity of what is said hold? A “tiger,” a “turtle,” a “toad.” After the oceans have been vacuumed of protein and people are eating farmed tilapia and caked algae, will Aesop’s platform of markers remain?


Wednesday, March 27, 2013

Ambivalence and Body Movement

Schneider et al. make interesting observations about the reciprocal correlations between our thoughts and body movements. We sway more from side to side when we feel ambivalent about a choice or situation, and if a swaying motion is applied to our bodies, it makes us feel more ambivalent about a topic on which we are already uncertain.
Prior research exploring the relationship between evaluations and body movements has focused on one-sided evaluations. However, people regularly encounter objects or situations about which they simultaneously hold both positive and negative views, which results in the experience of ambivalence. Such experiences are often described in physical terms: For example, people say they are “wavering” between two sides of an issue or are “torn.” Building on this observation, we designed two studies to explore the relationship between the experience of ambivalence and side-to-side movement, or wavering. In a first study, we used a Wii Balance Board to measure movement and found that people who are experiencing ambivalence move from side to side more than people who are not experiencing ambivalence. In a second study, we induced body movement to explore the reverse relationship and found that when people are made to move from side to side, their experiences of ambivalence are enhanced.
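For the curious, side-to-side movement of the kind measured with the Wii Balance Board is usually summarized from a center-of-pressure trace. Here is a simple sketch of two such summaries; the variable names are hypothetical and this is not the measure definition from the paper.

```python
# Sketch: quantify mediolateral (left-right) sway from a center-of-pressure trace.
import numpy as np

def mediolateral_sway(cop_x_cm):
    """cop_x_cm: 1-D center-of-pressure trace along the left-right axis, in cm."""
    x = np.asarray(cop_x_cm, dtype=float)
    x = x - x.mean()                        # remove the participant's average stance
    sway_sd = x.std()                       # dispersion of side-to-side excursions
    path_length = np.abs(np.diff(x)).sum()  # total left-right distance traveled
    return {"sd_cm": sway_sd, "path_cm": path_length}
```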

Tuesday, March 26, 2013

If you use Google Reader to follow Deric's MindBlog, please read on....

This blog has ~2,500 RSS subscribers. As I write this, 2,240 subscribers are obtaining the feed via Feedfetcher, which is how Google grabs RSS or Atom feeds when users subscribe to them in Google Reader or iGoogle. Feedfetcher collects and periodically refreshes these user-initiated feeds.

Because Google Reader doesn't bring in the billions of dollars that Google requires of a service, Google has announced that it is shutting the service down on July 1. If you want to continue getting MindBlog's RSS feed, you will need to use an alternative reader. This Lifehacker post suggests five of the best alternatives; a few further suggestions are here. (I use my My Yahoo home page to obtain RSS feeds from blogs I follow.) Also, a note: there are more petitions floating around the web to keep Google Reader alive than it is possible to count.
(Added note, a comment from a blog reader: "It should be noted that you don't need to use unreliable web services to aggregate rss/atom feeds. There are lots of excellent software programs to do so. By using actual software instead of a third party web service you have both insured access and offline access. I use rssowl, a program that runs on the big three OSes (http://www.rssowl.org/).")
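For readers tempted to roll their own solution, here is a minimal sketch of what any feed reader does under the hood, using the third-party feedparser library to poll an RSS/Atom feed and list recent items. The URL is a placeholder; substitute the feed address from your own subscription.

```python
# Sketch: poll an RSS/Atom feed and print the most recent items.
import feedparser

feed = feedparser.parse("http://example.com/feeds/posts/default")  # placeholder URL
for entry in feed.entries[:5]:
    print(entry.title, "->", entry.link)
```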

Monday, March 25, 2013

Are there trendy parts of the brain?

Behrens et al. do an interesting analysis, asking:
Are there really trendy parts of the brain? Or does each scientist falsely believe their own research area to be underrepresented in the top journals, and their friend's recent Nature paper to be the result of a passing fad? The maturity of functional brain imaging allows us to perform a rigorous test of this instinctual feeling. There have now been many thousands of imaging papers published across the journal spectrum. Are some brain regions really overrepresented in this literature? In addition, are papers reporting activation in some brain regions preferentially published in high-impact journals, whereas others are published in low-impact ones? To answer these questions, we examined 7342 functional contrasts published between 1985 and 2008 and documented in the BrainMap database.

Figure - (a) Distributions of activation frequency across the brain. Popular voxels are portrayed in red; unpopular ones in blue. (b) Frequency distribution of keywords describing experimental domains, paradigms, and functional contrasts. The size of each word is proportional to its frequency in the BrainMap database.
Journal impact factor strongly predicted activity in several different brain areas. With one exception in the primary visual cortex, we suspect these brain regions would largely confirm anecdotal hypotheses. For example, researchers who find activity in a prescribed part of the fusiform gyrus should be confident of having their article selected for publication in a high-impact journal, perhaps due to the role of the region in face processing. Other regions with proposed roles in emotional processing returned similarly stellar performances, including both the ventral and dorsal portions of the rostral medial prefrontal cortex, the anterior insular cortex, the anterior cingulate gyrus, and the amygdala. The recent interest in reward prediction errors might explain impactful peaks in the mid-brain and ventral striatum, areas that exhibited independent significant effects of impact factor, publication date, and their interaction: studies reporting activation in these regions are published in high-impact journals, and are increasing in number (as a proportion of all studies) over time.
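The test described in the clip, whether reporting activation in a given region predicts the impact factor of the journal, with publication date and an interaction term, amounts to a simple regression per region. Here is a toy sketch of that logic; the paper's actual statistical model is more involved, and the variable names are hypothetical.

```python
# Sketch: per-region regression of journal impact factor on whether a study
# reported activation there, publication year, and their interaction.
import numpy as np

def impact_factor_model(activation, year, impact_factor):
    """activation: 0/1 per study; year: publication year; impact_factor: per study."""
    a = np.asarray(activation, dtype=float)
    yr = np.asarray(year, dtype=float)
    yr = yr - yr.mean()                     # center year so the interaction is interpretable
    X = np.column_stack([np.ones_like(a), a, yr, a * yr])
    beta, *_ = np.linalg.lstsq(X, np.asarray(impact_factor, dtype=float), rcond=None)
    return dict(zip(["intercept", "activation", "year", "activation_x_year"], beta))
```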

Friday, March 22, 2013

A Cornucopia of Mind Blog sites.

Scientific American has announced that its daughter magazine, Scientific American Mind, has set up a Blogs site that lists a number of psychology and neuroscience blogs dealing with the mind. Just starting to sample the blogs listed is an overwhelming experience. Keeping up with blogs dealing with mind in the current blogosphere would be more than a full-time job. My own list of "Other Mind Blogs" in the right-hand column of this blog is several years old and now doesn't include many excellent current efforts. I occasionally find this blog listed in "Top Psychology Blogs" on other sites. We could all easily spend all our time "taking in each other's laundry," becoming aggregators of aggregators of aggregators ad infinitum. This is why I don't look much at other mind blogs, but rather stick to looking through recent original research articles in the major journals. The efforts of each of us who labor away passing on some small fraction of the research world are appreciated by a sufficient number of readers to motivate us to continue.

Having said I don't look at other mind blogs, I'll pass on this gem from "Brain Pickings" (click to enlarge and see the text more clearly): "Friendship - The Silent Places - Where Speech Ends."

Thursday, March 21, 2013

The brain basis of our superiority illusion.

One of the most robustly documented findings of psychology is the "optimism" bias, which leads us to view our past, our future, and our own abilities through rose-colored glasses. (Did you know that a spectacular 94% of college professors rate their teaching abilities as above average?) Equally well documented is the fact that people who have a fully realistic view of their abilities and their importance to groups tend to be depressed. It seems clear that most of us are completely unequipped to function without a vast array of positive delusions about our abilities, our futures, etc.

There is a large literature on this. Dan Dennett and Ryan McKay have written a treatise in Behavioral and Brain Sciences that examines possible evolutionary rationales for mistaken beliefs, bizarre delusions, instances of self-deception, and the like; they conclude that only positive illusions meet their criteria for being adaptive. Johnson and his colleagues have produced an evolutionary model suggesting that overconfidence maximizes individual fitness and that populations tend to become overconfident as long as the benefits from contested resources are sufficiently large compared with the cost of competition.

Yamada et al. now look at resting-state functional connectivity between brain regions whose activity correlates with the superiority illusion. Their abstract, and one figure from their paper:
The majority of individuals evaluate themselves as superior to average. This is a cognitive bias known as the “superiority illusion.” This illusion helps us to have hope for the future and is deep-rooted in the process of human evolution. In this study, we examined the default states of neural and molecular systems that generate this illusion, using resting-state functional MRI and PET. Resting-state functional connectivity between the frontal cortex and striatum regulated by inhibitory dopaminergic neurotransmission determines individual levels of the superiority illusion. Our findings help elucidate how this key aspect of the human mind is biologically determined, and identify potential molecular and neural targets for treatment for depressive realism.

Influence of striatal D2 availability on the superiority illusion is mediated through dorsal anterior cingulate - striatal functional connectivity. Assuming an inverse relationship between D2 receptor availability and presynaptic dopamine release, dopamine likely acts on striatal D2 receptors to suppress functional connectivity between the dorsal striatum and dACC. This connectivity predicts individual differences in the superiority illusion. The indirect effect of striatal D2 receptor availability on the superiority illusion is significantly mediated through dACC-striatal functional connectivity. “+” indicates a positive relationship; “–,” a negative relationship.
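The two quantities being related in the figure, per-subject resting-state connectivity and a behavioral illusion score, can be illustrated with a bare-bones sketch: connectivity as the correlation between two regions' BOLD time courses, then its across-subject relationship to the superiority-illusion measure. Names are hypothetical and this is not the authors' pipeline.

```python
# Sketch: seed-pair functional connectivity per subject, then its across-subject
# correlation with a superiority-illusion score.
import numpy as np

def functional_connectivity(ts_dacc, ts_striatum):
    """ts_*: 1-D regional BOLD time courses for one subject."""
    return np.corrcoef(ts_dacc, ts_striatum)[0, 1]

def brain_behavior_correlation(connectivity_per_subject, illusion_scores):
    """Across-subject correlation between dACC-striatal connectivity and the
    superiority-illusion measure (a negative value would fit the reported pattern)."""
    return np.corrcoef(connectivity_per_subject, illusion_scores)[0, 1]
```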

Wednesday, March 20, 2013

Would Tarzan believe in God?

Some clips from Konika Banerjee and Paul Bloom:
Would someone raised without exposure to religious views nonetheless come to believe in the existence of God, an afterlife, and the intentional creation of humans and other animals? Many scholars would answer yes, proposing that universal cognitive biases generate religious ideas anew within each individual mind. Drawing on evidence from developmental psychology, we argue here that the answer is no: children lack spontaneous theistic views and the emergence of religion is crucially dependent on culture.
...if universal, early-emerging cognitive biases generate religious ideas, we would expect to see these ideas emerge spontaneously. This would be akin to the process of creolization, such as when deaf children who are exposed to non-linguistic communication systems create their own sign language. However, such cases are, as best we know, non-existent. There are many examples where children are quick to endorse religious beliefs, often surprising their atheist parents. But this is consistent with receptivity, not generativity, as these beliefs correspond to those endorsed within the social environment in which children are raised.
Findings from developmental psychology support the following theory of the emergence of religious belief: humans possess a suite of sophisticated cognitive adaptations for social life, which make accessible certain concepts that are associated with religion, including design, purpose, agency, and body–soul dualism. However, more is needed to generate fully-fledged, sustained, and conscious religious beliefs, including a belief in gods, in divine creation of natural entities, and in life after death. Such beliefs require cultural support.