
Tuesday, March 11, 2014

Time, Money, and Morality

Gino and Mogilner note: "It seems that a day does not go by without some unethical behavior by a politician, movie star, professional athlete, or high-ranking executive making the headlines. Although less sensational, revelations of cheating have also crept into the sciences, and continue to show up in classrooms, businesses, and marriages."

They proceed to reflect on unethical behavior in people who also strive to maintain a positive self-concept, with morality being central to people's self-image. They:
...focus on two triggers that may influence self-reflection and are ubiquitous enough in the environment to have a chance at instigating a widespread effect on unethical behavior: money and time...We specifically predicted that priming people to think about time, rather than money, would lead them to behave more ethically by encouraging them to reflect on who they are and making them more conscious of how they conduct themselves so as to maintain a positive self-image. We tested this hypothesis across four experiments in which we primed participants to think about time or money and observed their tendencies to cheat for monetary or personal gain.
In a first experiment, they primed participants with money, time, or neither; participants then completed a numbers game in which they had the opportunity to cheat by overstating their performance, thereby taking unearned money. Participants primed with money were more likely to cheat.

The second experiment used the numbers game with time or money primes, but half the participants were told, “This game is an intelligence test that is designed to assess your likelihood to be successful in the future,” and the other half, “This game is a personality test that is designed to assess what type of person you are.” As in the first experiment, participants threw their actual matrix work sheets into a recycle bin, so that they believed they could overreport their performance (i.e., cheat) without getting caught. "In actuality, as in Experiment 1, we were able to match participants’ work sheets with the collection slips on which they reported their performance." The result: "[Only] when the game was framed as an intelligence test did thinking about money lead to greater cheating than thinking about time. When the game was framed as a personality test, there was no difference in cheating between the money and time conditions."

A third experiment manipulated self-reflection directly: when self-reflection was triggered through the use of a mirror, participants primed with money behaved the same way as those primed with time.

The fourth experiment suggested that "priming time reduces cheating by increasing self-reflection, and priming money increases cheating by lowering self-reflection. By measuring self-reflection directly through self-reports, this experiment provided further evidence for the hypothesized role of self-reflection as the psychological mechanism linking time, money, and morality."

Here is the abstract of the article:
Money, a resource that absorbs much daily attention, seems to be involved in much unethical behavior, which suggests that money itself may corrupt. This research examined a way to offset such potentially deleterious effects—by focusing on time, a resource that tends to receive less attention than money but is equally ubiquitous in daily life. Across four experiments, we examined whether shifting focus onto time can salvage individuals’ ethicality. We found that implicitly activating the construct of time, rather than money, leads individuals to behave more ethically by cheating less. We further found that priming time reduces cheating by making people reflect on who they are. Implications for the use of time primes in discouraging dishonesty are discussed.

Wednesday, January 22, 2014

The morning morality effect.

Here is an interesting tidbit from Kouchaki and Smith:
Are people more moral in the morning than in the afternoon? We propose that the normal, unremarkable experiences associated with everyday living can deplete one’s capacity to resist moral temptations. In a series of four experiments, both undergraduate students and a sample of U.S. adults engaged in less unethical behavior (e.g., less lying and cheating) on tasks performed in the morning than on the same tasks performed in the afternoon. This morning morality effect was mediated by decreases in moral awareness and self-control in the afternoon. Furthermore, the effect of time of day on unethical behavior was found to be stronger for people with a lower propensity to morally disengage. These findings highlight a simple yet pervasive factor (i.e., the time of day) that has important implications for moral behavior.

Tuesday, December 17, 2013

My grandsons - making it in the brave new world - part I

I will be going soon to Austin, Texas, to spend the holiday with my son's family, who live in the same house I grew up in. Every grandfather says this, but I have to also say how incredulous I am at the vastly different world my two-year-old grandson and his younger brother will face compared with the one I grew up in, a period of continuously expanding opportunities from the late 1940s to the late 1960s. The same Univ. of Wisconsin assistant professorship I took as a 27 year old would now go only to someone much older, who would most likely have to settle for a non-tenure-track position, if even that were available. With the partition of our economy into a service sector whose employees can't support a family and an educated, computer-savvy, creative, managing elite, an extraordinary set of skills is now required to 'make it.' David Brooks presents a list of mental types that might thrive in a world in which we must interface with intelligent machines:

Freestylers - who can play with the computer but know when to overrule it (as you sometimes overrule your GPS in neighborhoods you are familiar with).

Synthesizers - who surf vast amounts of data to crystallize a pattern or story.

Humanizers - who make the human-machine interplay feel more natural.

Conceptual engineers - who come up with creative methods to think about unexpected problems.

Motivators - who can inspire efforts on behalf of machines that are more naturally generated in the service of other humans.

Moralizers - who keep performance metrics from being reduced to productivity statistics that devalue personal moral traits like loyalty and end up destroying morale and social capital.

Greeters - who provide personalized services to the 15 percent of workers who 'make it' (have lots of disposable income).

Economizers - who advise the bottom 85 percent how to preserve rich lives on a small income.

Weavers - who try to deal with the social disintegration and disaffected lifestyles that are a consequence of the inegalitarian facts of this new world.

Thursday, November 21, 2013

The biology of sacred values.

Frank Rose does a nice piece in the NYTimes, pointing to the work of Gregory Berns and others on brain correlates of why financial incentives are irrelevant when “sacred values” are at stake (as in the failure of the financial incentives offered by the West to get Iran to give up its “right” to enrich uranium for “peaceful” uses). Offering money to get people to alter strongly held beliefs - on issues like gun control, abortion, or Israeli and Palestinian rights to the West Bank of the Jordan - results in moral outrage, feelings of contamination, and a need for moral cleansing. Work by Berns and others suggests we have radically different ways of processing ordinary and sacred beliefs. Berns…
...took M.R.I. images of participants’ brains as he asked them to consider changing their personal beliefs in exchange for money. Would they trade their preference for dogs over cats? What about their belief in God? Would they be willing to kill an innocent person?
When participants were questioned about issues of the dog-or-cat variety, their brain scans showed activity in the parietal cortex — a region that’s thought to be involved in making cost-benefit calculations. But when asked about issues on which they declined to make a trade, entirely different parts of the brain were activated — systems that are associated with telling right from wrong and with storing and retrieving rules. The result, Professor Berns observes, could be a new way to gauge sacred values “that is not solely dependent on self-report.”
Are we going to start running international negotiators through an M.R.I. machine to see where they’re processing the issues? [and determine whether or not someone is faking it when they claim sacred values] Highly unlikely. But results like Professor Berns’s might at least disprove the idea, still held by many, that every belief has its price. Given the intensely negative emotions that financial incentives can trigger, this might be a good lesson to learn.
Here is the abstract of the Berns et al. article, "The price of your soul: neural evidence for the non-utilitarian representation of sacred values," to which the above refers. It gives a bit more detail on the brain correlates of sacred values:
Sacred values, such as those associated with religious or ethnic identity, underlie many important individual and group decisions in life, and individuals typically resist attempts to trade off their sacred values in exchange for material benefits. Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests that they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We used an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

Wednesday, September 04, 2013

Interesting…thinking about science enhances moral behavior.

There have been a number of high-profile cases of dishonesty in reporting scientific results over the past ten years, but these lapses in the conduct of science may not change public attitudes toward the scientific method itself. Ma-Kellams and Blascovich, using the usual cohort of university undergraduates as subjects (this time at Univ. of California Santa Barbara), show that exposure to science and experimental primes of science increase the likelihood of enforcing moral norms. Thinking about science had a moralizing effect in several domains: interpersonal violations, prosocial intentions, and economic exploitation. The experiments tested whether inducing thoughts about science could influence both reported and actual moral behavior by "priming" students with words like “logical,” “hypothesis,” “laboratory,” and “theory.” Participants then judged the severity of a date-rape transgression, reported the level of altruistic activities they intended over the next month, and played a behavioral economic game that measures the level of altruistic motivation. The authors' conclusions:
These studies demonstrated the morally normative effects of lay notions of science. Thinking about science leads individuals to endorse more stringent moral norms and exhibit more morally normative behavior. These studies are the first of their kind to systematically and empirically test the relationship between science and morality. The present findings speak to this question and elucidate the value-laden outcomes of the notion of science.

Tuesday, October 09, 2012

Young children and adults: intrinsically motivated to see others helped

This interesting piece from Tomasello and collaborators:
Young children help other people, but it is not clear why. In the current study, we found that 2-year-old children’s sympathetic arousal, as measured by relative changes in pupil dilation, is similar when they themselves help a person and when they see that person being helped by a third party (and sympathetic arousal in both cases is different from that when the person is not being helped at all). These results demonstrate that the intrinsic motivation for young children’s helping behavior does not require that they perform the behavior themselves and thus “get credit” for it, but rather requires only that the other person be helped. Thus, from an early age, humans seem to have genuine concern for the welfare of others.
And Rand et al. use economic games with adult subjects to demonstrate that cooperation is intuitive, because cooperative heuristics are developed in daily life, where cooperation is typically advantageous. These data add to Kahneman's recent summary of evidence that much of human decision-making is governed by fast and automatic intuitions, rather than by slow, effortful thinking (see Kahneman, D., Thinking, Fast and Slow, Allen Lane, 2011). The Rand et al. abstract:
Cooperation is central to human social behaviour. However, choosing to cooperate requires individuals to incur a personal cost to benefit others. Here we explore the cognitive basis of cooperative decision-making in humans using a dual-process framework. We ask whether people are predisposed towards selfishness, behaving cooperatively only through active self-control; or whether they are intuitively cooperative, with reflection and prospective reasoning favouring ‘rational’ self-interest. To investigate this issue, we perform ten studies using economic games. We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.
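The "contributions" the abstract refers to suggest public-goods-style games, in which contributing is personally costly but collectively beneficial. Here is a minimal sketch of that payoff structure in Python (illustrative endowment and multiplier, not the authors' exact parameters):

    # Public goods game: each player gets an endowment and chooses how much
    # to contribute to a common pool. The pool is multiplied by a factor
    # greater than 1 but smaller than the group size, then split evenly, so
    # full contribution maximizes the group payoff while zero contribution
    # maximizes the individual payoff.
    def payoffs(contributions, endowment=10.0, multiplier=2.0):
        share = multiplier * sum(contributions) / len(contributions)
        return [endowment - c + share for c in contributions]

    print(payoffs([10, 10, 10, 10]))  # all cooperate: [20.0, 20.0, 20.0, 20.0]
    print(payoffs([0, 0, 0, 0]))      # all defect:    [10.0, 10.0, 10.0, 10.0]
    print(payoffs([0, 10, 10, 10]))   # lone defector does best: [25.0, 15.0, 15.0, 15.0]

The lone-defector case shows why cooperation requires incurring a personal cost: choosing quickly and intuitively to contribute more means moving toward the cooperative profile despite that cost.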

Thursday, October 04, 2012

Regulators of prosocial and empathetic behavior.

Two pieces in the September issue of Psychological Science deal with empathetic or prosocial behavior. Grant and Dutton make observations showing that prosocial behavior is boosted more by reflecting on giving than on receiving, and Lewis et al.'s experiments give an example of how stereotypes enhance empathetic accuracy.
Grant and Dutton:
Research shows that reflecting on benefits received can make people happier, but it is unclear whether or not such reflection makes them more helpful. Receiving benefits can promote prosocial behavior through reciprocity and positive affect, but these effects are often relationship-specific, short-lived, and complicated by ambivalent reactions. We propose that prosocial behavior is more likely when people reflect on being a benefactor to others, rather than a beneficiary. The experience of giving benefits may encourage prosocial behavior by increasing the salience and strength of one’s identity as a capable, caring contributor. In field and laboratory experiments, we found that participants who reflected about giving benefits voluntarily contributed more time to their university, and were more likely to donate money to natural-disaster victims, than were participants who reflected about receiving benefits. When it comes to reflection, giving may be more powerful than receiving as a driver of prosocial behavior.
Lewis et al.:
An ideal empathizer may attend to another person’s behavior in order to understand that person, but it is also possible that accurately understanding other people involves top-down strategies. We hypothesized that perceivers draw on stereotypes to infer other people’s thoughts and that stereotype use increases perceivers’ accuracy. In this study, perceivers (N = 161) inferred the thoughts of multiple targets. Inferences consistent with stereotypes for the targets’ group (new mothers) more accurately captured targets’ thoughts, particularly when actual thought content was also stereotypic. We also decomposed variance in empathic accuracy into thought, target, and perceiver variance. Although past research has frequently focused on variance between perceivers or targets (which assumes individual differences in the ability to understand other people or be understood, respectively), the current study showed that the most substantial variance was found within targets because of differences among thoughts.
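Their decomposition of empathic accuracy is, in effect, a random-effects variance partition. A sketch of the general form (the paper's exact model specification may differ):

    \sigma^2_{\text{total}} = \sigma^2_{\text{perceiver}} + \sigma^2_{\text{target}} + \sigma^2_{\text{thought(target)}} + \sigma^2_{\text{residual}}

where thoughts are nested within targets. On this reading, the finding is that the thought-within-target component dominates: accuracy varies more across the individual thoughts of a given target than across perceivers or targets.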

Friday, September 21, 2012

Signing beforehand increases honesty.

Here is a fascinating, simple study by Shu et al.: simply signing a statement of honesty at the top rather than at the bottom of a form makes you more honest. They outline the problem:
The annual tax gap between actual and claimed taxes due in the United States amounts to roughly $345 billion. The Internal Revenue Service estimates more than half this amount is due to individuals misrepresenting their income and deductions (1). Insurance is another domain burdened by the staggering cost of individual dishonesty; the Coalition Against Insurance Fraud estimated that the overall magnitude of insurance fraud in the United States totaled $80 billion in 2006 (2). The problem with curbing dishonesty in behaviors such as filing tax returns, submitting insurance claims, claiming business expenses or reporting billable hours is that they primarily rely on self-monitoring in lieu of external policing.
Here is their abstract:
Many written forms required by businesses and governments rely on honest reporting. Proof of honest intent is typically provided through signature at the end of, e.g., tax returns or insurance policy forms. Still, people sometimes cheat to advance their financial self-interests—at great costs to society. We test an easy-to-implement method to discourage dishonesty: signing at the beginning rather than at the end of a self-report, thereby reversing the order of the current practice. Using laboratory and field experiments, we find that signing before, rather than after, the opportunity to cheat makes ethics salient when they are needed most and significantly reduces dishonesty.
The experimental design used several different measures of cheating: self-reported earnings (income) on a math-puzzles task in which university participants could cheat for financial gain, and self-reported travel expenses to the laboratory (deductions) claimed on a tax-return form for research earnings. Another experiment was done in the field, with an insurance company in the southeastern United States asking some of its existing customers to report their odometer reading. The authors examined the effect of requiring the signature at the top of the form, at the bottom, or (as a control) not at all.

Monday, September 10, 2012

Investing in Karma - When Wanting Promotes Helping

An interesting piece from Converse et al. provides reliable evidence that people do good deeds when they want something beyond their control. This suggests that they act in accord with a karmic tenet rooted in immanent justice (but doesn't necessarily imply pervasive explicit belief in karma). Here is the abstract, slightly edited:
People often face outcomes of important events that are beyond their personal control, such as when they wait for an acceptance letter, job offer, or medical test results. We suggest that when wanting and uncertainty are high and personal control is lacking, people may be more likely to help others, as if they can encourage fate’s favor by doing good deeds proactively. Four experiments support this karmic-investment hypothesis. The first two experiments show that when people want an outcome over which they have little control, their donations of time and money increase, but their participation in other rewarding activities does not. A third experiment shows that, in addition, at a job fair, job seekers who feel the process is outside (vs. within) their control make more generous pledges to charities. A final experiment shows that karmic investments increase optimism about a desired outcome. We conclude by discussing the role of personal control and magical beliefs in this phenomenon.
Some clips from their discussion:
Past research has found that people automatically anticipate negative outcomes following behaviors that tempt fate, and that people associate positive outcomes with virtuous behaviors. Thus, people may develop a basic good-behavior—good-outcome association, such that hoping for good outcomes activates the cognitive script to do good deeds...whether based on explicit or implicit belief, some version of a karmic belief system must be at least momentarily activated when people face important, uncontrollable outcomes...our findings fit with the notion that people turn to external sources of control, such as gods and governments, when internal control is lacking, and may even turn to apparently magical systems when necessary...rather than increasing selfishness, wanting can increase helping...people may not only pursue reciprocal exchanges interpersonally, but may also attempt to bargain with the universe.

Monday, August 13, 2012

Culture of Empathy

I have finally taken time to look more thoroughly at a site noted in a comment to my July 25 post on compassion research. The "Culture of Empathy" site is an aggregator of resources and information about the values of empathy and compassion. It makes interesting, if a bit overwhelming, browsing. I feel like a complete troglodyte, as only now do I notice sites like CAUSES, which hosts seven different empathy-related causes that one can sign on to, listing the very same gentleman who commented on my post (Edwin Rutsch) as leader; or Scoop.it!, which hosts four different empathy-related web-based magazines run by, guess who?, Mr. Rutsch. Mr. Rutsch would also like you to join the Empathy Center Page on Facebook, and join him on Facebook Causes. This guy really gets around! The Culture of Empathy website lists summaries of a large number of interviews, book reviews, and conferences involving Mr. Rutsch, noting the neuroscience of empathy (things like mirror neurons, etc.), different cultural aspects of empathy, linguistics.... I guess it's gotta be a good thing, but while fully thinking that my own behavior could certainly be leavened by a more empathetic bias, I'm overwhelmed by this web input to the point of inaction regarding social venues to support.

Friday, August 10, 2012

More on Haidt and Moral Psychology

I have finally finished a complete reading of Jonathan Haidt's book "The Righteous Mind," which I have mentioned in several previous posts (enter 'Haidt' in the search box in the left column). This complete reading is unusual for me; my normal behavior is to just read a review of a new book, or skip, hop, and skim through the Kindle version. Here I pass on some clips from Jost's recent review.
In The Righteous Mind, Haidt attempts to explain the psychological foundations of morality and how they lead to political conflicts. The book's three parts are not as compatible or settled as Haidt's ingenious prose makes them seem. The first revisits the intriguing arguments of an earlier, influential paper in which he argued that moral reasoning is nothing but post hoc rationalizing of gut-level intuitions. The second introduces an evolutionarily inspired framework that specifies five or six “moral foundations” and applies this framework to an analysis of liberal-conservative differences in moral judgments. In the third part, Haidt speculates that patriotism, religiosity, and “hive psychology” in humans evolved rapidly through group-level selection.
After arguing that “moral reasoning” is nothing more than a post hoc rationalization of intuitive, emotional reactions, Haidt risks contradiction when claiming that liberals should embrace conservative moral intuitions about the importance of obeying authority, being loyal to the ingroup, and enforcing purity standards. If one were to accept Haidt's post hoc rationalization premise and his findings about differences in the moral judgments of liberals and conservatives, a more parsimonious (and empirically supportable) conjunction would be: For a variety of psychological reasons, conservatives do more rationalizing of gut-level reactions, and this makes them more moralistic (i.e., judgmental) than liberals. It does not, however, make them more moral in any meaningful sense of the word, nor does it provide a legitimate basis for criticizing liberal moral judgment the way Haidt does.
Haidt argues that the liberal moral code is deficient, because it is not based on all of his “moral foundations.” The liberal, he maintains, is like the idiot restaurateur who thought he could make a complete cuisine out of just one taste, however sweet. This illustrates the biggest flaw in Haidt's book: he swings back and forth between an allegedly value-neutral sense of “moral” (anything that an individual or a group believes is moral and serves to suppress selfishness) and a more prescriptive sense that he uses mainly to jab liberals. Ultimately, Haidt's own rhetorical choices render his claim to being unbiased unconvincing. If descriptive morality is based on whatever people believe, then both liberals and conservatives would seem to have equal claim to it. Does it really make sense, philosophically or psychologically or politically, to try to keep score, let alone to assert that “more is better” when it comes to moral judgment?
Before drawing sweeping, profound conclusions about the politics of morality, Haidt needs to address a more basic question: What are the specific, empirically falsifiable criteria for designating something as an evolutionarily grounded moral foundation? Haidt sets the bar pretty low—anything that suppresses individual selfishness in favor of group interests. By this definition, the decision to plunder (and perhaps even murder) members of another tribe would count as a moral adaptation. Recent research suggests that Machiavellianism, authoritarianism, social dominance, and prejudice are positively associated with the moral valuation of ingroup, authority, and purity themes. If these are to be ushered into the ever-broadening tent of group morality, one wonders what it would take to be refused admission.

Monday, August 06, 2012

The MindBlog queue: moral responsibility; evolution of music; booze and hypnosis

During this period of relative inactivity for MindBlog, while I am pursuing other projects, I still accumulate references to work that looks interesting. Rather than letting them disappear into the list of potential posts that has by now grown to 50 pages of links, I’m going to post some of the links, with minimal descriptions, to make it possible for readers who find a favorite topic to click their way to the source.

Did your brain make you do it? Neuroscience and moral responsibility.
“Naïve dualism” is the belief that acts are brought about either by intentions or by the physical laws that govern our brains and that those two types of causes — psychological and biological — are categorically distinct. People are responsible for actions resulting from one but not the other. (In citing neuroscience, the Supreme Court may have been guilty of naïve dualism: did it really need brain evidence to conclude that adolescents are immature?)...Naïve dualism is misguided. “Was the cause psychological or biological?” is the wrong question when assigning responsibility for an action. All psychological states are also biological ones.
A better question is “how strong was the relation between the cause (whatever it happened to be) and the effect?” If, hypothetically, only 1 percent of people with a brain malfunction (or a history of being abused) commit violence, ordinary considerations about blame would still seem relevant. But if 99 percent of them do, you might start to wonder how responsible they really are.

Evolution of music by public choice
Music evolves as composers, performers, and consumers favor some musical variants over others. To investigate the role of consumer selection, we constructed a Darwinian music engine consisting of a population of short audio loops that sexually reproduce and mutate. This population evolved for 2,513 generations under the selective influence of 6,931 consumers who rated the loops’ aesthetic qualities. We found that the loops quickly evolved into music attributable, in part, to the evolution of aesthetically pleasing chords and rhythms. Later, however, evolution slowed. Applying the Price equation, a general description of evolutionary processes, we found that this stasis was mostly attributable to a decrease in the fidelity of transmission. Our experiment shows how cultural dynamics can be explained in terms of competing evolutionary forces.
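The Price equation they invoke partitions the change in a mean trait into a selection term and a transmission term. In its standard form (how DarwinTunes maps consumer ratings onto fitness is the authors' modeling choice):

    \bar{w}\,\Delta\bar{z} = \underbrace{\mathrm{Cov}(w_i, z_i)}_{\text{selection}} + \underbrace{\mathrm{E}(w_i\,\Delta z_i)}_{\text{transmission}}

where \(z_i\) is a trait value (e.g., a loop's aesthetic score), \(w_i\) its fitness, and \(\Delta z_i\) the parent-offspring change. The reported stasis corresponds to the transmission term, which captures copying fidelity, working against the selection term.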
Also check out:
Adaptive walks on the fitness landscape of music
and
Darwin Tunes on SoundCloud
Finally, this unrelated quirky fragment:
Booze enhances hypnotic susceptibility

Thursday, March 29, 2012

The Righteous Mind

I want to point to two reviews of Jonathan Haidt's new book, which has the title of this post. It brings exceptional clarity to the definition of contemporary liberals and conservatives, and argues that it is the liberals who are not getting the point. First, some clips from Kristof's comments:
Jonathan Haidt, a University of Virginia psychology professor, argues that, for liberals, morality is largely a matter of three values: caring for the weak, fairness and liberty. Conservatives share those concerns (although they think of fairness and liberty differently) and add three others: loyalty, respect for authority and sanctity...Those latter values bind groups together with a shared respect for symbols and institutions such as the flag or the military...This year’s Republican primaries have been a kaleidoscope of loyalty, authority and sanctity issues...Americans speak about values in six languages, from care to sanctity. Conservatives speak all six, but liberals are fluent in only three...Moral psychology can help to explain why the Democratic Party has had so much difficulty connecting with voters.

From Saletan's review:
Haidt argues that people are fundamentally intuitive, not rational. If you want to persuade others, you have to appeal to their sentiments...We were never designed to listen to reason. When you ask people moral questions, time their responses and scan their brains, their answers and brain activation patterns indicate that they reach conclusions quickly and produce reasons later only to justify what they’ve decided...The problem isn’t that people don’t reason. They do reason. But their arguments aim to support their conclusions, not yours. Reason doesn’t work like a judge or teacher, impartially weighing evidence or guiding us to wisdom. It works more like a lawyer or press secretary, justifying our acts and judgments to others...Haidt invokes an evolutionary hypothesis: We compete for social status, and the key advantage in this struggle is the ability to influence others. Reason, in this view, evolved to help us spin, not to help us learn. So if you want to change people’s minds, Haidt concludes, don’t appeal to their reason. Appeal to reason’s boss: the underlying moral intuitions whose conclusions reason defends.

We acquire morality the same way we acquire food preferences: we start with what we’re given. If it tastes good, we stick with it. If it doesn’t, we reject it. People accept God, authority and karma because these ideas suit their moral taste buds. Haidt points to research showing that people punish cheaters, accept many hierarchies and don’t support equal distribution of benefits when contributions are unequal...You don’t have to go abroad to see these ideas. You can find them in the Republican Party. Social conservatives see welfare and feminism as threats to responsibility and family stability. The Tea Party hates redistribution because it interferes with letting people reap what they earn. Faith, patriotism, valor, chastity, law and order — these Republican themes touch all six moral foundations, whereas Democrats, in Haidt’s analysis, focus almost entirely on care and fighting oppression. This is Haidt’s startling message to the left: When it comes to morality, conservatives are more broad-minded than liberals. They serve a more varied diet.

Is income inequality immoral? Should government favor religion? Can we tolerate cultures of female subjugation? And how far should we trust our instincts? Should people who find homosexuality repugnant overcome that reaction?...Haidt’s faith in moral taste receptors may not survive this scrutiny. Our taste for sanctity or authority, like our taste for sugar, could turn out to be a dangerous relic. But Haidt is right that we must learn what we have been, even if our nature is to transcend it.
Haidt's book references a number of experiments noted in MindBlog posts, on differences in the psychologies and autonomic nervous system reactivities of conservatives and liberals.

By the way, in this same vein, I might point to Chris Mooney's comments on his book "The Republican Brain" (which he almost called "The Science of Truthiness"), which asks why very intelligent Republicans deny scientific realities such as evolution and climate change.

Monday, March 19, 2012

Serotonin and reaction to unfairness.

How should one deal with line cutters? Or, more generally, what would you do if you faced unfair or wrong behavior? Studies have shown that Machiavellian individuals accept unfair offers more often in ultimatum games (UG), using realism and opportunism to maximize their self-interest. 5-HT (serotonin) transmission, for which the dorsal raphe nucleus (DRN) is a major source, is important in brain regions, such as the dorsolateral prefrontal cortex, that are recruited for this kind of cognitive control. Honest and trustful persons who cannot easily separate themselves from moral precepts tend to adhere to a norm of fairness and thus show lower tolerance of unfairness. Takahashi et al. have now used positron emission tomography to directly measure 5-HT transporters (5-HTT) and 5-HT1A receptors, and they find that low 5-HTT in the DRN is associated with straightforwardness and trust personality traits and predicts higher rejection rates of unfair offers in the ultimatum game. Here is their abstract:
How does one deal with unfair behaviors? This subject has long been investigated by various disciplines including philosophy, psychology, economics, and biology. However, our reactions to unfairness differ from one individual to another. Experimental economics studies using the ultimatum game (UG), in which players must decide whether to accept or reject fair or unfair offers, have also shown that there are substantial individual differences in reaction to unfairness. However, little is known about psychological as well as neurobiological mechanisms of this observation. We combined a molecular imaging technique, an economics game, and a personality inventory to elucidate the neurobiological mechanism of heterogeneous reactions to unfairness. Contrary to the common belief that aggressive personalities (impulsivity or hostility) are related to the high rejection rate of unfair offers in UG, we found that individuals with apparently peaceful personalities (straightforwardness and trust) rejected more often and were engaged in personally costly forms of retaliation. Furthermore, individuals with a low level of serotonin transporters in the dorsal raphe nucleus (DRN) are honest and trustful, and thus cannot tolerate unfairness, being candid in expressing their frustrations. In other words, higher central serotonin transmission might allow us to behave adroitly and opportunistically, being good at playing games while pursuing self-interest. We provide unique neurobiological evidence to account for individual differences of reaction to unfairness.
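For readers unfamiliar with the ultimatum game, here is a minimal sketch of one round in Python (illustrative stakes and threshold only; min_acceptable is a hypothetical parameter standing in for a responder's fairness norm):

    # Ultimatum game: a proposer offers a split of a fixed pot; the responder
    # either accepts (both are paid as proposed) or rejects (both get nothing).
    # Rejecting an unfair offer is thus a personally costly form of retaliation.
    def ultimatum_round(pot, offer, min_acceptable):
        if offer >= min_acceptable:
            return pot - offer, offer  # (proposer payoff, responder payoff)
        return 0, 0                    # rejection punishes both players

    print(ultimatum_round(pot=10, offer=2, min_acceptable=4))  # (0, 0): unfair offer rejected
    print(ultimatum_round(pot=10, offer=5, min_acceptable=4))  # (5, 5): fair offer accepted

In Takahashi et al.'s terms, the straightforward, trustful individuals behave like responders with a high threshold, rejecting even though rejection costs them money.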

[Figure from the paper: (A) SPM image showing regions of negative correlation between rejection rate of unfair offers in the UG and 5-HTT binding in the DRN; (B) plots and regression line of that correlation (R = −0.50, P = 0.026), with dashed lines marking the 95% confidence interval boundaries.]

Monday, July 04, 2011

MindBlog retrospective - 3rd and 1st person narrative in personality change

Over the past few weeks, I've been scanning the titles of old MindBlog posts (all 2,588 of them, taken 300 at a time, because fatigue sets in very quickly in such an activity), glancing through the contents of the ones I recall as being most interesting to me, and assembling a list of ~80 reflecting some major themes. I am struck by the number of REALLY INTERESTING items that I had COMPLETELY forgotten about. (An illustration is the paragraph starting below with the graphic, which is a repeat of a post from May 30, 2007.) It frustrates me that I have lost from recall so much good material. And, of course, it is also frustrating that the insight we do manage to retain will not necessarily change us (the subject of this March 23, 2006 post).


Benedict Carey writes a piece in the Tuesday NY Times science section (PDF here) reviewing work done by a number of researchers on how the stories people tell themselves (and others) about themselves do or don't help with making adaptive behavior changes. Third-person narratives, in which subjects view themselves from a distance - as actors in their own narrative play - correlate with a higher sense of personal power and ability to make personality changes. First-person narratives - in which the subject describes the experience of being immersed in their personal play - are more likely than third-person narratives to correlate with passivity and feeling powerless to effect change. This reminds me of Marc Hauser's distinction between being a moral agent and a moral patient. The third person can be a more metacognitive stance, thinking about oneself in a narrative script, while the first person can be a less reflective acting out of the script.

Wednesday, May 04, 2011

Have your legal hearing just after the judge’s lunch break.

Fascinating observations by Danziger et al.:
Are judicial rulings based solely on laws and facts? Legal formalism holds that judges apply legal reasons to the facts of a case in a rational, mechanical, and deliberative manner. In contrast, legal realists argue that the rational application of legal reasons does not sufficiently explain the decisions of judges and that psychological, political, and social factors influence judicial rulings. We test the common caricature of realism that justice is “what the judge ate for breakfast” in sequential parole decisions made by experienced judges. We record the judges’ two daily food breaks, which result in segmenting the deliberations of the day into three distinct “decision sessions.” We find that the percentage of favorable rulings drops gradually from ≈65% to nearly zero within each decision session and returns abruptly to ≈65% after a break. Our findings suggest that judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions.

Tuesday, April 12, 2011

Hurt the flesh, cleanse the soul....

Here are some summary clips, slightly edited, from an interesting study by Bastian et al. (performed on the usual batch of college undergraduates, paid $10 for their participation):
Pain purifies. History is replete with examples of ritualized or self-inflicted pain aimed at achieving purification...When reminded of an immoral deed, people are motivated to experience physical pain. Student participants in the study who wrote about an unethical behavior not only held their hands in ice water longer but also rated the experience as more painful than did participants who wrote about an everyday interaction. Critically, experiencing pain reduced people’s feelings of guilt, and the effect of the painful task on ratings of guilt was greater than the effect of a similar but nonpainful task.

Pain has traditionally been understood as purely physical in nature, but it is more accurate to describe it as the intersection of body, mind, and culture. People give meaning to pain, and we argue that people interpret pain within a judicial model of pain as punishment. Our results suggest that the experience of pain has psychological currency in rebalancing the scales of justice—an interpretation of pain that is analogous to notions of retributive justice. Interpreted in this way, pain has the capacity to resolve guilt.

People are socialized to understand pain within this judicial framework. Physical pain is employed as a penalty (e.g., spanking children for misbehavior), and unexplained pain is often understood as punishment from God. The judicial model is explicit in the Latin word for pain, poena, which means “to pay the penalty.” Understood this way, pain may be perceived as repayment for sin in three ways. First, pain is the embodiment of atonement. Just as physical cleansing washes away sin, physical pain is experienced as a penalty, and paying that penalty reestablishes moral purity. Second, subjecting oneself to pain communicates remorse to others (including God) and signals that one has paid for one’s sins, and this removes the threat of external punishment. Third, tolerating the punishment of pain is a test of one’s virtue, reaffirming one’s positive identity to oneself and others.

Previous work has demonstrated that giving meaning to pain affects people’s management of that pain. By introducing the judicial model of pain, we emphasize that giving meaning to pain can also affect other psychological processes. Although additional research is needed, our findings demonstrate that experiencing pain as a penalty can cause people to feel that their guilt is resolved and their soul cleansed.

Friday, April 01, 2011

A bad taste in the mouth - more on embodied cognition and emotion

Eskine et al. provide yet another example of how emotion induced by a physical stimulus can influence a moral stance. (Previous MindBlog postings have noted this effect for hard/soft or rough/smooth surfaces, hot/cold temperature stimuli, clean/dirty smells or visual images, etc.) The experimental subjects were the usual captive college psychology course undergraduates (54 of them in this case):
Can sweet-tasting substances trigger kind, favorable judgments about other people? What about substances that are disgusting and bitter? Various studies have linked physical disgust to moral disgust, but despite the rich and sometimes striking findings these studies have yielded, no research has explored morality in conjunction with taste, which can vary greatly and may differentially affect cognition. The research reported here tested the effects of taste perception on moral judgments. After consuming a sweet beverage, a bitter beverage, or water, participants rated a variety of moral transgressions. Results showed that taste perception significantly affected moral judgments, such that physical disgust (induced via a bitter taste) elicited feelings of moral disgust. Further, this effect was more pronounced in participants with politically conservative views than in participants with politically liberal views. Taken together, these differential findings suggest that embodied gustatory experiences may affect moral processing more than previously thought.

Thursday, February 03, 2011

The biology of morality

I have already pointed to a TED talk by Sam Harris, and thought I would pass on a few clips from a review of his related book, "The Moral Landscape - How Science Can Determine Human Values." On Harris:
...his dispensation is that “Faith, if it is ever right about anything, is right by accident.” In applying reason to questions of morality, Harris claims that we can define morality only as it relates to the well-being of conscious organisms and that such well-being is completely measurable using the methods of neurobiology. This suggests to him that any action can be clearly classified as moral (increasing well-being) or immoral (decreasing well-being) without ambiguity. However, it doesn't mean that there is only one answer to a question of morality. He contends that “the existence of multiple peaks on the moral landscape does not make them any less real or worthy of discovery. Nor would it make the difference between being on a peak and being stuck deep in a valley any less clear or consequential.” But Harris firmly disagrees with the moral relativist views that there is no clearly defined morality that cuts across different societies and that therefore all views of morality are equally meritorious. He writes, “Multiculturalism, moral relativism, political correctness, tolerance even of intolerance—these are the familiar consequences of separating facts and values on the left.” “My goal,” he states, “is to convince you that human knowledge and human values can no longer be kept apart.”

Harris isn't choosy when it comes to vilifying religions. He notes the willingness of many to ignore genocide or cases of sexual abuse within their churches while taking strong actions against individuals who perform abortions (or refuse to prohibit them). He also draws from history examples of undeniably immoral choices in the name of religion. Harris criticizes scientists for persisting in their faith and for failing to confront head-on a society that he thinks is mired in superstition.

Harris thinks too many scientists have compromised on principles. “Many of our secular critics worry that if we oblige people to choose between reason and faith, they will choose faith and cease to support scientific research.” Even the journal Nature upholds the idea of nonoverlapping magisteria of Gould. Harris complains, “It is one thing to be told that the pope is a peerless champion of reason and that his opposition to embryonic stem-cell research is both morally principled and completely uncontaminated by religious dogmatism; it is quite another to be told this by a Stanford physician who sits on the President's Council on Bioethics.”

One might conclude that although at one time the best way to define and enforce moral behavior was through revealed faith, as science and reason advance, we can chip away at the old edifice and build anew. Stories of a young-Earth creation now look rather untenable, but in the past they might have been the only way to instill awe and teach a new and meaningful moral code. Rather than nonoverlapping magisteria, the domains of science and religion are intermingling all the time. The Moral Landscape may represent a new beach-head in this quest.

Friday, October 29, 2010

de Waal on the Biology of Morality

A colleague pointed out this thoughtful piece on morals without God written by primatologist Frans de Waal.
The debate is less about the truth than about how to handle it. For those who believe that morality comes straight from God the creator, acceptance of evolution would open a moral abyss...but I am wary of anyone whose belief system is the only thing standing between them and repulsive behavior. Why not assume that our humanity, including the self-control needed for livable societies, is built into us? Does anyone truly believe that our ancestors lacked social norms before they had religion? Did they never assist others in need, or complain about an unfair deal? Humans must have worried about the functioning of their communities well before the current religions arose, which is only a few thousand years ago. Not that religion is irrelevant...but it is an add-on rather than the wellspring of morality.
de Waal gives an engaging review of his observations on primate behavior that show clear evidence for moral and altruistic behaviors that cannot be linked to simple "selfish gene" models, and he ends with this comment about monkey and chimpanzee behaviors:
...they strive for a certain kind of society. For example, female chimpanzees have been seen to drag reluctant males towards each other to make up after a fight, removing weapons from their hands, and high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as yet another sign that the building blocks of morality are older than humanity, and that we do not need God to explain how we got where we are today. On the other hand, what would happen if we were able to excise religion from society? I doubt that science and the naturalistic worldview could fill the void and become an inspiration for the good. Any framework we develop to advocate a certain moral outlook is bound to produce its own list of principles, its own prophets, and attract its own devoted followers, so that it will soon look like any old religion.