Showing posts with label morality. Show all posts

Monday, September 26, 2016

A hedonism hub in the human brain.

An open-access article from Zacharopoulos et al., who found that people who rate hedonism as more important in their lives have a larger globus pallidus (GP) in the left hemisphere:
Human values are abstract ideals that motivate behavior. The motivational nature of human values raises the possibility that they might be underpinned by brain structures that are particularly involved in motivated behavior and reward processing. We hypothesized that variation in subcortical hubs of the reward system and their main connecting pathway, the superolateral medial forebrain bundle (slMFB), is associated with individual value orientation. We conducted Pearson's correlation between the scores of 10 human values and the volumes of 14 subcortical structures and microstructural properties of the medial forebrain bundle in a sample of 87 participants, correcting for multiple comparisons (i.e., 190). We found a positive association between the value that people attach to hedonism and the volume of the left globus pallidus (GP). We then tested whether microstructural parameters (i.e., fractional anisotropy and myelin volume fraction) of the slMFB, which connects with the GP, are also associated with hedonism and found a significant, albeit at an uncorrected level, positive association between the myelin volume fraction within the left slMFB and hedonism scores. This is the first study to elucidate the relationship between the importance people attach to the human value of hedonism and structural variation in reward-related subcortical brain regions.
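The screening procedure the authors describe (Pearson correlations across every value-by-brain-measure pair, with a correction for 190 tests) is straightforward to sketch. The code below is illustrative only: the simulated data, the 10 x 19 decomposition of the 190 tests, and the simple Bonferroni threshold are my assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 87
n_values, n_measures = 10, 19          # assumed decomposition: 10 * 19 = 190 tests

# Simulated stand-ins for value-importance scores and brain measures
values = rng.normal(size=(n_subjects, n_values))
measures = rng.normal(size=(n_subjects, n_measures))

alpha = 0.05
bonferroni_alpha = alpha / (n_values * n_measures)   # 0.05 / 190

# Screen every pair; keep only correlations surviving the corrected threshold
significant = []
for i in range(n_values):
    for j in range(n_measures):
        r, p = pearsonr(values[:, i], measures[:, j])
        if p < bonferroni_alpha:
            significant.append((i, j, r, p))
```

With pure noise, as simulated here, surviving pairs should be rare or absent; the point is only the shape of the screen-then-correct logic.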

Thursday, September 08, 2016

Reason is not required for a life of meaning.

Robert Burton, former neurology chief at UCSF and a neuroscience author, has contributed an excellent short essay to the NYTimes philosophy series The Stone. A few clips:
Few would disagree with two age-old truisms: We should strive to shape our lives with reason, and a central prerequisite for the good life is a personal sense of meaning...Any philosophical approach to values and purpose must acknowledge this fundamental neurological reality: a visceral sense of meaning in one’s life is an involuntary mental state that, like joy or disgust, is independent from and resistant to the best of arguments...Anyone who has experienced a bout of spontaneous depression knows the despair of feeling that nothing in life is worth pursuing and that no argument, no matter how inspired, can fill the void. Similarly, we are all familiar with the countless narratives of religious figures “losing their way” despite retaining their formal beliefs.
As neuroscience attempts to pound away at the idea of pure rationality and underscore the primacy of subliminal mental activity, I am increasingly drawn to the metaphor of idiosyncratic mental taste buds. From genetic factors (a single gene determines whether we find brussels sprouts bitter or sweet), to the cultural — considering fried grasshoppers and grilled monkey brains as delicacies — taste isn’t a matter of the best set of arguments...If thoughts, like foods, come in a dazzling variety of flavors, and personal taste trumps reason, philosophy — which relies most heavily on reason, and aims to foster the acquisition of objective knowledge — is in a bind.
Though we don’t know how thoughts are produced by the brain, it is hard to imagine having a thought unaccompanied by some associated mental state. We experience a thought as pleasing, revolting, correct, incorrect, obvious, stupid, brilliant, etc. Though integral to our thoughts, these qualifiers arise out of different brain mechanisms from those that produce the raw thought. As examples, feelings of disgust, empathy and knowing arise from different areas of brain and can be provoked de novo in volunteer subjects via electrical stimulation even when the subjects are unaware of having any concomitant thought at all. This chicken-and-egg relationship between feelings and thought can readily be seen in how we make moral judgments...The psychologist Jonathan Haidt and others have shown that our moral stances strongly correlate with the degree of activation of those brain areas that generate a sense of disgust and revulsion. According to Haidt, reason provides an after-the-fact explanation for moral decisions that are preceded by inherently reflexive positive or negative feelings. Think about your stance on pedophilia or denying a kidney transplant to a serial killer.
After noting the work of Libet and others showing that our sense of agency is an illusion (awareness of initiating an action arises well after the brain has already set it in motion, most strikingly in tennis players and baseball batters), Burton suggests that:
It is unlikely that there is any fundamental difference in how the brain initiates thought and action. We learn the process of thinking incrementally, acquiring knowledge of language, logic, the external world and cultural norms and expectations just as we learn physical actions like talking, walking or playing the piano. If we conceptualize thought as a mental motor skill subject to the same temporal reorganization as high-speed sports, it’s hard to avoid the conclusion that the experience of free will (agency) and conscious rational deliberation are both biologically generated illusions.
What then are we to do with the concept of rationality? It would be a shame to get rid of a term useful in characterizing the clarity of a line of reasoning. Everyone understands that “being rational” implies trying to strip away biases and innate subjectivity in order to make the best possible decision. But what if the word rational leads us to scientifically unsound conclusions?
Going forward, the greatest challenge for philosophy will be to remain relevant while conceding that, like the rest of the animal kingdom, we are decision-making organisms rather than rational agents, and that our most logical conclusions about moral and ethical values can’t be scientifically verified nor guaranteed to pass the test of time. (The history of science should serve as a cautionary tale for anyone tempted to believe in the persistent truth of untestable ideas).
Even so, I would hate to discard such truisms such as “know thyself” or “the unexamined life isn’t worth living.” Reason allows us new ways of seeing, just as close listening to a piece of music can reveal previously unheard melodies and rhythms or observing an ant hill can give us an unexpected appreciation of nature’s harmonies. These various forms of inquiry aren’t dependent upon logic and verification; they are modes of perception.

Tuesday, August 23, 2016

Slow motion increases perceived intent.

The abstract from interesting work by Caruso et al.:
To determine the appropriate punishment for a harmful action, people must often make inferences about the transgressor’s intent. In courtrooms and popular media, such inferences increasingly rely on video evidence, which is often played in “slow motion.” Four experiments (n = 1,610) involving real surveillance footage from a murder or broadcast replays of violent contact in professional football demonstrate that viewing an action in slow motion, compared with regular speed, can cause viewers to perceive an action as more intentional. This slow motion intentionality bias occurred, in part, because slow motion video caused participants to feel like the actor had more time to act, even when they knew how much clock time had actually elapsed. Four additional experiments (n = 2,737) reveal that allowing viewers to see both regular speed and slow motion replay mitigates the bias, but does not eliminate it. We conclude that an empirical understanding of the effect of slow motion on mental state attribution should inform the life-or-death decisions that are currently based on tacit assumptions about the objectivity of human perception.

Friday, August 12, 2016

Why do people infer “ought” from “is”?

Tworek and Cimpian offer an interesting perspective, doing experiments illustrating how we ascribe intrinsic value to what is customary. I give the start of their introduction setting the context, and then their abstract:
In his dissent from the Supreme Court decision recognizing a federal constitutional right for people to marry a same-sex partner, Chief Justice Roberts noted that heterosexual marriage has been around “for millennia” in societies all over the world: “the Kalahari Bushmen and the Han Chinese, the Carthaginians and the Aztecs”. A possible reading of this remark is that we should take what is typical as a signpost for what is good—how things ought to be. Whatever the correct interpretation here, the tendency to move seamlessly from “is” to “ought” is a mainstay of everyday reasoning. However, the validity of such “is”-to-“ought” inferences (or ought inferences) is at best uncertain. The mere existence of a pattern of behavior does not, by itself, reveal that the behavior is good. For instance, slavery and child labor were common throughout history, and still are in some parts of the world, yet it does not follow that people ought to engage in these practices. Why, then, do people frequently draw ought inferences and find them persuasive?
Abstract
People tend to judge what is typical as also good and appropriate—as what ought to be. What accounts for the prevalence of these judgments, given that their validity is at best uncertain? We hypothesized that the tendency to reason from “is” to “ought” is due in part to a systematic bias in people’s (nonmoral) explanations, whereby regularities (e.g., giving roses on Valentine’s Day) are explained predominantly via inherent or intrinsic facts (e.g., roses are beautiful). In turn, these inherence-biased explanations lead to value-laden downstream conclusions (e.g., it is good to give roses). Consistent with this proposal, results from five studies (N = 629 children and adults) suggested that, from an early age, the bias toward inherence in explanations fosters inferences that imbue observed reality with value. Given that explanations fundamentally determine how people understand the world, the bias toward inherence in these judgments is likely to exert substantial influence over sociomoral understanding.

Monday, July 18, 2016

Insurance makes sellers of services dishonest.

This piece from Kerschbamer et al. really resonates with my experiences in Florida, scam central for health insurance fraud:
Honesty is a fundamental pillar for cooperation in human societies and thus for their economic welfare. However, humans do not always act in an honest way. Here, we examine how insurance coverage affects the degree of honesty in credence goods markets. Such markets are plagued by strong incentives for fraudulent behavior of sellers, resulting in estimated annual costs of billions of dollars to customers and the society as a whole. Prime examples of credence goods are all kinds of repair services, the provision of medical treatments, the sale of software programs, and the provision of taxi rides in unfamiliar cities. We examine in a natural field experiment how computer repair shops take advantage of customers’ insurance for repair costs. In a control treatment, the average repair price is about EUR 70, whereas the repair bill increases by more than 80% when the service provider is informed that an insurance would reimburse the bill. Our design allows decomposing the sources of this economically impressive difference, showing that it is mainly due to the overprovision of parts and overcharging of working time. A survey among repair shops shows that the higher bills are mainly ascribed to insured customers being less likely to be concerned about minimizing costs because a third party (the insurer) pays the bill. Overall, our results strongly suggest that insurance coverage greatly increases the extent of dishonesty in important sectors of the economy with potentially huge costs to customers and whole economies.

Friday, July 15, 2016

Who blames the victim?

Niemi and Young ask:
What determines whether someone feels sympathy or scorn for the victim of a crime? Is it a function of political affiliation? Of gender? Of the nature of the crime? (If you are mugged on a midnight stroll through the park, some people will feel compassion for you, while others will admonish you for being there in the first place. If you are raped by an acquaintance after getting drunk at a party, some will be moved by your misfortune, while others will ask why you put yourself in such a situation.) Their experiments find that the critical factor lies in a particular set of moral values. Their experiments show that the more strongly you privilege loyalty, obedience and purity — as opposed to values such as care and fairness — the more likely you are to blame the victim.
The abstract:
Why do victims sometimes receive sympathy for their suffering and at other times scorn and blame? Here we show a powerful role for moral values in attitudes toward victims. We measured moral values associated with unconditionally prohibiting harm (“individualizing values”) versus moral values associated with prohibiting behavior that destabilizes groups and relationships (“binding values”: loyalty, obedience to authority, and purity). Increased endorsement of binding values predicted increased ratings of victims as contaminated (Studies 1-4); increased blame and responsibility attributed to victims, increased perceptions of victims’ (versus perpetrators’) behaviors as contributing to the outcome, and decreased focus on perpetrators (Studies 2-3). Patterns persisted controlling for politics, just world beliefs, and right-wing authoritarianism. Experimentally manipulating linguistic focus off of victims and onto perpetrators reduced victim blame. Both binding values and focus modulated victim blame through victim responsibility attributions. Findings indicate the important role of ideology in attitudes toward victims via effects on responsibility attribution.

Thursday, June 09, 2016

Unethical amnesia

From Kouchaki and Gino:
Despite our optimistic belief that we would behave honestly when facing the temptation to act unethically, we often cross ethical boundaries. This paper explores one possibility of why people engage in unethical behavior over time by suggesting that their memory for their past unethical actions is impaired. We propose that, after engaging in unethical behavior, individuals’ memories of their actions become more obfuscated over time because of the psychological distress and discomfort such misdeeds cause. In nine studies (n = 2,109), we show that engaging in unethical behavior produces changes in memory so that memories of unethical actions gradually become less clear and vivid than memories of ethical actions or other types of actions that are either positive or negative in valence. We term this memory obfuscation of one’s unethical acts over time “unethical amnesia.” Because of unethical amnesia, people are more likely to act dishonestly repeatedly over time.

Monday, May 23, 2016

When philosophy lost its way.

Frodeman and Briggle offer a lament over the irreversible passing of the practice of philosophy as a moral endeavor, one that might offer a view of the good society apart from the prescriptions of religion. Some clips from their essay:
Before its migration to the university, philosophy had never had a central home. Philosophers could be found anywhere — serving as diplomats, living off pensions, grinding lenses, as well as within a university. Afterward, if they were “serious” thinkers, the expectation was that philosophers would inhabit the research university…This purification occurred in response to at least two events. The first was the development of the natural sciences, as a field of study clearly distinct from philosophy, circa 1870, and the appearance of the social sciences in the decade thereafter…The second event was the placing of philosophy as one more discipline alongside these sciences within the modern research university. A result was that philosophy, previously the queen of the disciplines, was displaced, as the natural and social sciences divided the world between them.
Philosophers needed to embrace the structure of the modern research university, which consists of various specialties demarcated from one another. That was the only way to secure the survival of their newly demarcated, newly purified discipline. “Real” or “serious” philosophers had to be identified, trained and credentialed. Disciplinary philosophy became the reigning standard for what would count as proper philosophy.
Having adopted the same structural form as the sciences, it’s no wonder philosophy fell prey to physics envy and feelings of inadequacy. Philosophy adopted the scientific modus operandi of knowledge production, but failed to match the sciences in terms of making progress in describing the world. Much has been made of this inability of philosophy to match the cognitive success of the sciences. But what has passed unnoticed is philosophy’s all-too-successful aping of the institutional form of the sciences. We, too, produce research articles. We, too, are judged by the same coin of the realm: peer-reviewed products. We, too, develop sub-specializations far from the comprehension of the person on the street. In all of these ways we are so very “scientific.”
The act of purification accompanying the creation of the modern research university was not just about differentiating realms of knowledge. It was also about divorcing knowledge from virtue. Though it seems foreign to us now, before purification the philosopher (and natural philosopher) was assumed to be morally superior to other sorts of people…The study of philosophy elevated those who pursued it. Knowing and being good were intimately linked. It was widely understood that the point of philosophy was to become good rather than simply to collect or produce knowledge…The purification made it no longer sensible to speak of nature, including human nature, in terms of purposes and functions…By the late 19th century, Kierkegaard and Nietzsche had proved the failure of philosophy to establish any shared standard for choosing one way of life over another…There was a brief window when philosophy could have replaced religion as the glue of society; but the moment passed. People stopped listening as philosophers focused on debates among themselves.
Once knowledge and goodness were divorced, scientists could be regarded as experts, but there are no morals or lessons to be drawn from their work. Science derives its authority from impersonal structures and methods, not the superior character of the scientist. The individual scientist is no different from the average Joe, with no special authority to pronounce on what ought to be done…philosophy has aped the sciences by fostering a culture that might be called “the genius contest.” Philosophic activity devolved into a contest to prove just how clever one can be in creating or destroying arguments. Today, a hyperactive productivist churn of scholarship keeps philosophers chained to their computers. Like the sciences, philosophy has largely become a technical enterprise, the only difference being that we manipulate words rather than genes or chemicals. Lost is the once common-sense notion that philosophers are seeking the good life — that we ought to be (in spite of our failings) model citizens and human beings. Having become specialists, we have lost sight of the whole. The point of philosophy now is to be smart, not good. It has been the heart of our undoing.

Wednesday, January 13, 2016

Purity, Disgust and Donald Trump

Following the thread of the past few posts (on the partisan divide and empathy), I want to pass on this must-read article by Thomas Edsall, who summarizes work of academic researchers probing the role of purity/disgust, order/chaos, anger and fear, in the electorate. Here is a great graphic from the article:


Monday, January 11, 2016

How learning shapes the empathic brain.

From Hein et al.:

Abstract
Deficits in empathy enhance conflicts and human suffering. Thus, it is crucial to understand how empathy can be learned and how learning experiences shape empathy-related processes in the human brain. As a model of empathy deficits, we used the well-established suppression of empathy-related brain responses for the suffering of out-groups and tested whether and how out-group empathy is boosted by a learning intervention. During this intervention, participants received costly help equally often from an out-group member (experimental group) or an in-group member (control group). We show that receiving help from an out-group member elicits a classical learning signal (prediction error) in the anterior insular cortex. This signal in turn predicts a subsequent increase of empathy for a different out-group member (generalization). The enhancement of empathy-related insula responses by the neural prediction error signal was mediated by an establishment of positive emotions toward the out-group member. Finally, we show that surprisingly few positive learning experiences are sufficient to increase empathy. Our results specify the neural and psychological mechanisms through which learning interacts with empathy, and thus provide a neurobiological account for the plasticity of empathic reactions.

Wednesday, May 13, 2015

Human Purpose

The recent NYTimes David Brooks Op-Ed piece “What is your purpose?” has drawn a lot of feedback and comment. He laments the passing of the era of lofty authority figures like Reinhold Niebuhr who argued for a coherent moral ecology. (I remember as a Quincy House Harvard undergraduate in 1962 having breakfast with Reinhold and Ursula Niebuhr during their several weeks' residence in the house.)
These days we live in a culture that is more diverse, decentralized, interactive and democratized…Public debate is now undermoralized and overpoliticized…Intellectual prestige has drifted away from theologians, poets and philosophers and toward neuroscientists, economists, evolutionary biologists and big data analysts. These scholars have a lot of knowledge to bring, but they’re not in the business of offering wisdom on the ultimate questions.
And there you have it. Per Thomas Wolfe, “You Can’t Go Home Again.” Brooks’ “neuroscientists, economists, evolutionary biologists and big data analysts” have let the genie out of the bottle. We are very clear now that “purpose” is a human invention in the service of passing on our genes. I have seen no clearer statement on purpose than the one given by E. O. Wilson, which I excerpted in my Dec. 5, 2014 post.

Tuesday, May 12, 2015

Trust and the Insula

From Belfi et al.:
Reciprocal trust is a crucial component of cooperative, mutually beneficial social relationships. Previous research using tasks that require judging and developing interpersonal trust has suggested that the insula may be an important brain region underlying these processes. Here, using a neuropsychological approach, we investigated the role of the insula in reciprocal trust during the Trust Game (TG), an interpersonal economic exchange. Consistent with previous research, we found that neurologically normal adults reciprocate trust in kind, i.e., they increase trust in response to increases from their partners, and decrease trust in response to decreases. In contrast, individuals with damage to the insula displayed abnormal expressions of trust. Specifically, these individuals behaved benevolently (expressing misplaced trust) when playing the role of investor, and malevolently (violating their partner's trust) when playing the role of the trustee. Our findings lend further support to the idea that the insula is important for expressing normal interpersonal trust, perhaps because the insula helps to recognize risk during decision-making and to identify social norm violations.
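For readers unfamiliar with the Trust Game used here, its payoff structure is simple to sketch. The endowment and multiplier below are the commonly used values for this class of game, assumed for illustration; the exact parameters in Belfi et al. may differ.

```python
def trust_game_round(sent, returned_fraction, endowment=10, multiplier=3):
    """One round of the Trust Game (standard parameters assumed).

    The investor sends `sent` units of an `endowment` to the trustee;
    the amount is multiplied in transit; the trustee returns some
    fraction of what arrived. Trust is expressed by sending more;
    reciprocity by returning more."""
    assert 0 <= sent <= endowment and 0 <= returned_fraction <= 1
    transferred = sent * multiplier                  # what the trustee receives
    returned = transferred * returned_fraction       # what comes back to the investor
    investor_payoff = endowment - sent + returned
    trustee_payoff = transferred - returned
    return investor_payoff, trustee_payoff
```

For example, sending half the endowment and having half returned (`trust_game_round(5, 0.5)`) leaves both players better off than no trust at all, which is why reciprocal trust is mutually beneficial.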

Monday, April 13, 2015

Manipulating moral decisions by exploiting eye gaze.

Here is a fascinating piece of work from Pärnamets et al.:
Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals’ decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants’ eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
We hypothesized that participants’ eye gaze reveals their decision process owing to a general coupling between sensorimotor and decision processes. By using a gaze-contingent probe and selecting when a decision is prompted, the resulting choice can be biased toward a randomly predetermined option.
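The gaze-contingent logic described above is easy to make concrete. The sketch below is schematic: the dwell-time threshold, the sample format, and the function names are my assumptions, not the authors' actual implementation.

```python
import random

def run_gaze_contingent_trial(gaze_samples, options=("A", "B"),
                              dwell_threshold_ms=750, rng=random):
    """Simulate one gaze-contingent trial.

    gaze_samples: iterable of (option_or_None, duration_ms) fixation samples.
    Returns (target, index_at_which_the_choice_was_prompted); the index is
    None if the target never accumulated enough dwell time."""
    target = rng.choice(options)          # randomly predetermined, hidden from the participant
    dwell = dict.fromkeys(options, 0.0)   # accumulated fixation time per option
    for i, (looked_at, ms) in enumerate(gaze_samples):
        if looked_at in dwell:
            dwell[looked_at] += ms
        if dwell[target] >= dwell_threshold_ms:
            return target, i              # interrupt deliberation: prompt the choice now
    return target, None
```

The bias arises because the prompt systematically lands at a moment when the participant happens to be (or has just been) attending to the target, which the participant cannot detect.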

Wednesday, September 24, 2014

Morality in real life versus the lab.

The majority of studies on morality have used artificial, controlled laboratory settings in which participants respond to presented moral issues (such as the famous runaway trolley dilemma: Should you sacrifice one person to save five?). Hoffman et al. have now taken the study of morality into the real world, calling volunteer participants on their cell phones at random times to note moral acts that they committed or were the target of, witnessed directly, or heard about. This allowed them to analyze daily dynamics in a way not possible in laboratory studies. Their findings confirmed several laboratory results on moral contagion (receiving a good deed makes it more likely that we will do one), moral licensing (doing good entitles a bit of doing bad), political differences in moral values and concerns, and the tendency to overoptimistically predict one's own future moral behavior while accurately predicting the not-so-moral future behavior of others. They found little difference in daily moral behavior between religious and nonreligious people.
The science of morality has drawn heavily on well-controlled but artificial laboratory settings. To study everyday morality, we repeatedly assessed moral or immoral acts and experiences in a large (N = 1252) sample using ecological momentary assessment. Moral experiences were surprisingly frequent and manifold. Liberals and conservatives emphasized somewhat different moral dimensions. Religious and nonreligious participants did not differ in the likelihood or quality of committed moral and immoral acts. Being the target of moral or immoral deeds had the strongest impact on happiness, whereas committing moral or immoral deeds had the strongest impact on sense of purpose. Analyses of daily dynamics revealed evidence for both moral contagion and moral licensing. In sum, morality science may benefit from a closer look at the antecedents, dynamics, and consequences of everyday moral experience.

Wednesday, September 03, 2014

Morality - how theists and nontheists differ

Shariff et al. offer an interesting review of the ways in which the moral concerns of theists and nontheists overlap and differ, as a result of psychological differences in social investment, motivations for prosocial behavior, meta-ethics, and cognitive styles. Some clips:
Many aspects of religions – such as their emphasis on credibility-enhancing displays of commitment – serve to create an ideologically aligned and cohesive ingroup… this tighter social connection may also lead to more parochial moral attitudes – selectively favoring the ingroup and actively derogating the outgroup.
Religious groups exert strong pressure on group members to conform to the requirements and moral ideals of the community. Although the drive to appear virtuous to others is all but universal, it is especially pronounced among theists. An extensive meta-analysis found theists scoring consistently higher than nontheists on measures of socially desirable responding…A recent meta-analysis revealed that nontheists, by contrast, are generally unaffected by invocations of supernatural agents; compared with baseline, nontheists tend to be no more prosocial when primed with god concepts…Nontheists do, however, show increases in prosocial behavior when primed with concepts relating to secular institutions, such as courts and the police.
For believers, God is not just the ultimate arbiter of justice, but the author of morality itself. This meta-ethical belief provides theists with a unique foundation for thinking about moral issues, distinct from their nonreligious counterparts. Recent research suggests that theists are moral objectivists; that is, they tend to believe that when two people disagree about a moral issue, only one person can be correct…religious individuals appear to moralize a wider range of actions beyond those pertaining to harm and injustice, including disobedience of authority, disloyalty to one's ingroup, and sexual impurity… By contrast, nontheists are more inclined than theists to view morality as subjective or culturally relative. Critically, however, this difference is more pronounced with regard to moral issues that have little to do with harm or injustice (e.g., sexual conduct).
Although theists and nontheists disagree whether obedience to authority or sexual impurity are morally relevant concepts, there is much greater consensus about moral issues involving harm and injustice. For example, both religious and nonreligious individuals take a predominantly deontological stance toward torture and both groups find acts of unjust harm (e.g., killing an innocent for no good reason) to be objectively wrong. All world religions defend some version of the Golden Rule, a doctrine that reflects evolved inclinations toward fairness and reciprocity. Recent studies suggest that individuals, independent of religion, exhibit an impulse to behave cooperatively and that they manage to override this immediate prosocial impulse only on further reflection. This universal preference toward prosociality is apparent even in infancy. Thus, although theists and nontheists may be divided through differences in sociality, earthly and supernatural reputational concerns, and meta-ethics, the two groups are united in what could be considered ‘core’ intuitive preferences for justice and compassion. Although the two groups may sometimes disagree about which groups or individuals deserve justice or their compassion, these core moral intuitions form the best basis for mutual understanding and intergroup conciliation.

Friday, August 29, 2014

The origins of morality.

Mark Johnson has generated some creative and seminal ideas in his books "The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason" and, with George Lakoff, "Metaphors We Live By." I pass on a few clips from Les Beldo's review of his most recent book, "Morality for Humans: Ethical Understanding from the Perspective of Cognitive Science":
Over the past 25 years, a growing number of cognitive scientists have taken it as their mission to find an empirical basis within brain science for the distinctive character of moral judgments. Investigators such as Marc Hauser, Steven Pinker, and Jonathan Haidt have posited the existence of an innate, domain-specific moral faculty in humans, be it a “universal moral grammar,” a “moral instinct,” or an “intuitive ethics.”
In Morality for Humans, Mark Johnson introduces an approach he calls “moral naturalism.” It is committed to the idea that moral knowledge does not exist on some separate plane but rather in the everyday habits, practices, institutions, and “bio-regulation” of lived organisms. Johnson is skeptical, however, of claims about the existence of a moral module in the human brain. He notes that “there are simply far too many complexly interacting multifunctional systems … in our intuitive moral judgments for there to be anything remotely resembling a distinct moral faculty.” Although he never uses the term, Johnson argues that moral problem-solving relies entirely on cognitive processes like logic, empathy, or narration. He sees the idea of an innate moral faculty as just another attempt to prove the existence of immutable moral laws, not in divine will or in pure reason but in a strong normative reading of cognitive science and evolutionary biology.
Johnson locates the source of values in our social and biological needs, cultural representations, and personal experiences. Here, Johnson has no qualms about violating the so-called naturalistic fallacy, which suggests that normative statements about how things ought to be cannot be derived from factual statements about what is. He moves freely between descriptions of human needs and human tendencies, on the one hand, and, on the other, the normative suggestion that we ought to fulfill those needs and support those tendencies. Dismissing the naturalistic fallacy is easy if one thinks of it as an esoteric philosophical concept. But the term refers to a real logical problem of which Johnson is in fact acutely aware: “the fact that we have come to value certain states of affairs,” he writes, “is no guarantee that we should value them in the way we do.” This has been a particular weakness of studies that would make normative claims based on findings in cognitive neuroscience. How can descriptions of how our brains work tell us anything about what we ought to do in particular situations? It is a problem Johnson never resolves.
Morality is typically distinguished from other domains of social judgment by its unconditionality. A moral judgment refers to something that is considered right or wrong in and of itself. Johnson rejects the idea that moral judgments are unconditional, saying instead that the “trumping force” of morality owes to the fact that “certain things tend to matter more for us because they are thought to be necessary for the well-being of ourselves and others.” Individual well-being and societal cohesion are practical ends, however, and concerns about achieving them are matters of prudent conduct and prudent governance. This, along with Johnson's repeated insistence that moral problem-solving is no different in kind from any other form of problem-solving, leads one to wonder why he bothers to retain the concept of “morality” at all. Johnson suggests that values exist only in relation to some predefined or agreed-upon set of goods, feelings, or human needs, but that still creates fertile ground for hypothetical imperatives that are binding upon anyone who accepts the most basic premises of society. Why is this not enough? Is the stigma of moral relativism so frightening? What's so bad about prudence?

Monday, August 25, 2014

Origins of good and evil in human babies.

Felix Warneken does a review in TICS (Trends in Cognitive Sciences) of Paul Bloom's new book "Just Babies: The Origins of Good and Evil," which argues that humans, already in the first year of life, have a basic moral sense that is shaped by innate evolved processes.
Bloom...reviews studies in which babies can choose to touch one of two geometrically shaped agents with googly eyes – and they prefer to touch one who helps a struggling fellow up a hill rather than one who pushes that fellow down. This indicates that babies like helping and despise harming others, even if they are only a third party who observes how other people treat each other. Beyond their judgments of the actions of others, young children also display helpful tendencies in their own behavior. Bloom reviews the extensive work on toddlers as young as 18 months of age who display a tendency to comfort others who are in distress, and spontaneously help clumsy people by picking up dropped objects and holding doors open. Last but not least, preschool children seem to have a sense of fairness when faced with the task of divvying up desirable resources, with equality already serving as a guiding principle.
...although our basic, parochially bound moral sentiments come naturally to us without much effort, applying these principles to strangers takes some mental effort. It requires that we employ perspective-taking in our interactions with others...Expanding our moral circle to include strangers thus depends on socialization and abilities that develop only in late childhood...our evolutionarily evolved morality is prepared for kin and friends, but not for strangers. This can be seen in young children's ingroup biases and toddlers’ stranger anxiety.
Still,
...human children depend more on interactions beyond the immediate family than do our closest evolutionary relatives... Infants thus have to tolerate being handed around and interact with many unfamiliar people from early on. We might therefore expect infants to be open-minded and vigilant at the same time, creating their social circles in a more sophisticated manner than ducklings who follow either white feathers or men with white beards, whichever they see first. And it seems as if they do so, not in a naïve fashion, but in a sophisticated way that balances risk and opportunity.

Friday, July 04, 2014

Moral judgements depend on what language we’re speaking.

Costa et al. use the famous trolley problem to offer another example of the incredible power of the tribal or "us versus them" nature of our psychology. Studies have shown that this mentality (fundamental, for example, to the current chaos in the Middle East) emerges spontaneously in previously homogeneous groups of young children as well as adults. In the trolley problem, the following scenario is presented to subjects: An approaching trolley is about to kill five people farther down the tracks. The only way to stop it is to push a large man off the footbridge on which you are standing and onto the tracks below. This will save the five people but kill the man. (It will not help if you jump; you are not large enough.) Do you push him? Costa et al. find that when people are presented with the trolley problem in a foreign language, they are more willing to sacrifice one person to save five than when they are presented with the dilemma in their native tongue. Their abstract:
Should you sacrifice one man to save five? Whatever your answer, it should not depend on whether you were asked the question in your native language or a foreign tongue so long as you understood the problem. And yet here we report evidence that people using a foreign language make substantially more utilitarian decisions when faced with such moral dilemmas. We argue that this stems from the reduced emotional response elicited by the foreign language, consequently reducing the impact of intuitive emotional concerns. In general, we suggest that the increased psychological distance of using a foreign language induces utilitarianism. This shows that moral judgments can be heavily affected by an orthogonal property to moral principles, and importantly, one that is relevant to hundreds of millions of individuals on a daily basis.

Tuesday, May 20, 2014

Morality and perception speed

Here is an interesting nugget... We are more likely to see a word flashed for a very brief interval if it has moral valence. Words related to morality can be identified after a 40-millisecond peek, but nonmoral words need an extra 10 ms of exposure. From Gantman and Van Bavel:
Highlights
• We examined whether moral concerns shape awareness of ambiguous stimuli.
• Participants saw moral and non-moral words in a lexical decision task.
• Moral and non-moral words were matched on length and frequency in the lexicon.
• Participants correctly identified moral words more frequently than non-moral words.
• These experiments suggest that moral values may shape perceptual awareness.
Abstract
People perceive religious and moral iconography in ambiguous objects, ranging from grilled cheese to bird feces. In the current research, we examined whether moral concerns can shape awareness of perceptually ambiguous stimuli. In three experiments, we presented masked moral and non-moral words around the threshold for conscious awareness as part of a lexical decision task. Participants correctly identified moral words more frequently than non-moral words—a phenomenon we term the moral pop-out effect. The moral pop-out effect was only evident when stimuli were presented at durations that made them perceptually ambiguous, but not when the stimuli were presented too quickly to perceive or slowly enough to easily perceive. The moral pop-out effect was not moderated by exposure to harm and cannot be explained by differences in arousal, valence, or extremity. Although most models of moral psychology assume the initial perception of moral stimuli, our research suggests that moral beliefs and values may shape perceptual awareness.

Wednesday, April 16, 2014

Poor people judge more harshly.

From Pitesa and Thau:
In the research presented here, we tested the idea that a lack of material resources (e.g., low income) causes people to make harsher moral judgments because a lack of material resources is associated with a lower ability to cope with the effects of others’ harmful behavior. Consistent with this idea, results from a large cross-cultural survey (Study 1) showed that both a chronic (due to low income) and a situational (due to inflation) lack of material resources were associated with harsher moral judgments. The effect of inflation was stronger for low-income individuals, whom inflation renders relatively more vulnerable. In a follow-up experiment (Study 2), we manipulated whether participants perceived themselves as lacking material resources by employing different anchors on the scale they used to report their income. The manipulation led participants in the material-resources-lacking condition to make harsher judgments of harmful, but not of nonharmful, transgressions, and this effect was explained by a sense of vulnerability. Alternative explanations were excluded. These results demonstrate a functional and contextually situated nature of moral psychology.