Over the past 25 years, a growing number of cognitive scientists have taken it as their mission to find an empirical basis within brain science for the distinctive character of moral judgments. Investigators such as Marc Hauser, Steven Pinker, and Jonathan Haidt have posited the existence of an innate, domain-specific moral faculty in humans, be it a “universal moral grammar,” a “moral instinct,” or an “intuitive ethics.”
In Morality for Humans, Mark Johnson introduces an approach he calls “moral naturalism.” It is committed to the idea that moral knowledge does not exist on some separate plane but rather in the everyday habits, practices, institutions, and “bio-regulation” of living organisms. Johnson is skeptical, however, of claims about the existence of a moral module in the human brain. He notes that “there are simply far too many complexly interacting multifunctional systems … in our intuitive moral judgments for there to be anything remotely resembling a distinct moral faculty.” Although he never uses the term, Johnson argues that moral problem-solving is domain-general, relying entirely on ordinary cognitive processes such as logic, empathy, and narration. He sees the idea of an innate moral faculty as just another attempt to prove the existence of immutable moral laws, grounded not in divine will or in pure reason but in a strong normative reading of cognitive science and evolutionary biology.
Johnson locates the source of values in our social and biological needs, cultural representations, and personal experiences. Here, Johnson has no qualms about committing the so-called naturalistic fallacy, which holds that normative statements about how things ought to be cannot be derived from factual statements about what is. He moves freely between descriptions of human needs and human tendencies, on the one hand, and, on the other, the normative suggestion that we ought to fulfill those needs and support those tendencies. Dismissing the naturalistic fallacy is easy if one thinks of it as an esoteric philosophical concept. But the term refers to a real logical problem of which Johnson is in fact acutely aware: “the fact that we have come to value certain states of affairs,” he writes, “is no guarantee that we should value them in the way we do.” This has been a particular weakness of studies that would make normative claims based on findings in cognitive neuroscience. How can descriptions of how our brains work tell us anything about what we ought to do in particular situations? It is a problem Johnson never resolves.
Morality is typically distinguished from other domains of social judgment by its unconditionality. A moral judgment refers to something that is considered right or wrong in and of itself. Johnson rejects the idea that moral judgments are unconditional, saying instead that the “trumping force” of morality owes to the fact that “certain things tend to matter more for us because they are thought to be necessary for the well-being of ourselves and others.” Individual well-being and societal cohesion are practical ends, however, and concerns about achieving them are matters of prudent conduct and prudent governance. This, along with Johnson's repeated insistence that moral problem-solving is no different in kind from any other form of problem-solving, leads one to wonder why he bothers to retain the concept of “morality” at all. Johnson suggests that values exist only in relation to some predefined or agreed-upon set of goods, feelings, or human needs, but that still creates fertile ground for hypothetical imperatives that are binding upon anyone who accepts the most basic premises of society. Why is this not enough? Is the stigma of moral relativism so frightening? What's so bad about prudence?