By now I've read at least five mixed (mainly positive) reviews of Marc Hauser's new book "Moral Minds," and the commentary on the book by Nicholas Wade in the 10/31/06 New York Times stirs me to mention it in a post.
Hauser suggests that we have a universal and innate capacity for developing moral rules, analogous to our capacity to develop language in the presence of other humans. A more reserved review by Richard Rorty in an earlier New York Times Book Review raises some concerns: "The exuberant triumphalism of the prologue to “Moral Minds” leads the reader to expect that Hauser will lay out criteria for distinguishing parochial moral codes from universal principles, and will offer at least a tentative list of those principles. These expectations are not fulfilled...the reader is left guessing about how he proposes to distinguish morality not just from etiquette, but also from prudential calculation, mindless conformity to peer pressure and various other things. This makes it hard to figure out what exactly his moral module is supposed to do. It also makes it difficult to envisage experiments that would help us decide between his hypothesis and the view that all we need to internalize a moral code is general-purpose learning-from-experience circuitry — the same circuitry that lets us internalize, say, the rules of baseball."
A review by Bloom and Jarudi in Nature makes some further points: "Certain deep parallels between language and morality make Hauser's proposal worth taking seriously. ...In other regards, however, language seems very different from morality. For one thing, linguistic knowledge is distinct from emotion. You might be disgusted or outraged by what somebody says, but the principles that make sense of sentences are themselves entirely cold-blooded. Your eyes do not well with tears as you unconsciously determine the structural geometry of a verb phrase. By contrast — and Hauser wrestles with this throughout Moral Minds — even those who accept that some moral capacity is innate often see it as inextricably linked to emotion. Perhaps the universal core of morality is a set of emotional responses — disgust, shame, sympathy, guilt and so on — that are triggered by certain situations. This hypothesis is supported by clear demonstrations that, at least in some circumstances, emotion precedes intuition. ...A different concern is that languages are combinatorial symbolic systems. An English speaker, for example, knows perhaps hundreds of thousands of words, and also knows principles of syntax that dictate how these words combine with one another to form sentences. There are other combinatorial systems in human cognition, such as number and music, but it's not clear that morality is one of them. Even if it is distinct from emotion, moral knowledge might be better characterized as a small list of evolved rules, perhaps simple (such as a default prohibition against intentional harm), perhaps complex (such as some version of the doctrine of double effect), but still very different in character from linguistic knowledge."
Countering these reservations, from Wade's article: "Much of the present evidence for the moral grammar is indirect. Some of it comes from psychological tests of children, showing that they have an innate sense of fairness that starts to unfold at age 4. Some comes from ingenious dilemmas devised to show a subconscious moral judgment generator at work. These are known by the moral philosophers who developed them as “trolley problems." ...Suppose you are standing by a railroad track. Ahead, in a deep cutting from which no escape is possible, five people are walking on the track. You hear a train approaching. Beside you is a lever with which you can switch the train to a sidetrack. One person is walking on the sidetrack. Is it O.K. to pull the lever and save the five people, though one will die?...Most people say it is....Assume now you are on a bridge overlooking the track. Ahead, five people on the track are at risk. You can save them by throwing down a heavy object into the path of the approaching train. One is available beside you, in the form of a fat man. Is it O.K. to push him to save the five? Most people say no, although lives saved and lost are the same as in the first problem." (Note: brain imaging experiments show that the second scenario activates emotional areas of the brain that counter the more frontal, rational areas engaged by the first scenario.) ..."Why does the moral grammar generate such different judgments in apparently similar situations? It makes a distinction, Dr. Hauser writes, between a foreseen harm (the train killing the person on the track) and an intended harm (throwing the person in front of the train), despite the fact that the consequences are the same in either case. It also rates killing an animal as more acceptable than killing a person... Many people cannot articulate the foreseen/intended distinction, Dr. Hauser says, a sign that it is being made at inaccessible levels of the mind.
This inability challenges the general belief that moral behavior is learned. For if people cannot articulate the foreseen/intended distinction, how can they teach it?"
This last point seems weak; all kinds of teaching occur without articulation. Even granting that some moral judgments can be more rapid than conscious thought, or are carried out by unconscious background processes, how do we design experiments to distinguish whether they are innate or learned? We need a paradigm as powerful as that provided by the Nicaraguan school for deaf children, where the children invented among themselves a unique sign language following Chomsky's rules for a universal grammar. Would a group of children isolated from outside moral influence develop universal moral codes (an experiment that can't be done), or would we have the scenario of Golding's "Lord of the Flies"?