Showing posts with label evolutionary psychology.

Saturday, December 07, 2024

Battle of the Bots - AI gets creative with Brain Hacks

This post is meant to archive my recent exercise of asking ChatGPT 4o, Perplexity, Gemini Advanced, and Anthropic's Claude to expand my MindBlog's Brain Hacks post into a more complete essay that offers more background and context. They responded with slightly different styles and impressive results that required tweaking for only a few small errors. I thought my more concise text was more inviting, and didn't see anything really new, so my next prompt (which I had asked ChatGPT to edit and improve) said:

"Your responses are well-constructed, but I’m seeking a deeper layer of insight—something that might challenge conventional perspectives or unveil an overlooked nuance. Can you reframe these ideas or introduce a novel approach that stretches beyond familiar interpretations?"

Here things got interesting. Click the links above and scroll down to the responses to this final prompt to see the results. The links will most likely be maintained as long as I keep paying accounts with these services. (Anthropic does not offer links to its exchanges.)

I pass on here what I think is the most original and striking response, from Perplexity. It was right on target, brimming with new perspectives (some of them the subject of previous posts on my MindBlog) and pointing to a treasure trove of articles and ideas I was not familiar with, drawn from the seven sources it linked. Its response also provided a potpourri of more or less relevant graphic images of brains and models.

Here is the Perplexity text, with links to a few of the seven sources it cites, all of which you can obtain directly from Perplexity:

Reframing our understanding of introspective awareness and cognitive modulation requires us to challenge some fundamental assumptions about the nature of consciousness and self-regulation. Let's explore a more provocative perspective that integrates recent neuroscientific findings with evolutionary theory and cognitive science.

The Illusion of Conscious Control

Contrary to the intuitive notion that our conscious mind is the primary driver of behavior and emotion regulation, emerging evidence suggests that consciousness might be more of an observer than a controller. The brain's emotional and regulatory processes often occur before we become consciously aware of them, suggesting that our sense of volitional control might be, to some extent, an illusion.

Consider the possibility that what we perceive as "introspective awareness" is actually a post-hoc narrative constructed by our brain to make sense of unconscious processes. This perspective challenges the traditional view of top-down emotion regulation and suggests a more complex interplay between conscious and unconscious processes.

The Social Evolution of Consciousness

Recent theories propose that human consciousness evolved primarily for social survival rather than individual cognitive enhancement [ref]. This social evolutionary framework suggests that subjective awareness might lack independent causal influence over psychological processes or actions. Instead, it may serve as a communication tool for social interaction and collective decision-making.

This perspective reframes our understanding of self-awareness and emotion regulation. Rather than viewing these abilities as individual cognitive skills, we might consider them as evolved social tools that facilitate group cohesion and cooperation. The ability to regulate emotions, in this context, becomes less about personal control and more about social signaling and coordination.

Neurotransmitter Systems and Emotion Regulation

Recent research has begun to map the overlap between emotion regulation brain networks and specific neurotransmitter systems [ref] [ref]. This approach offers a novel way to understand the biochemical basis of self-regulation. Instead of focusing solely on cognitive strategies or neural networks, we might conceptualize emotion regulation as a dynamic interplay between neurotransmitter systems.

This biochemical perspective opens up new avenues for understanding individual differences in emotion regulation abilities. It suggests that what we perceive as "willpower" or "self-control" might be more accurately described as the efficient functioning of specific neurotransmitter systems.

The Myth of the Unified Self

Traditional models of self-awareness and emotion regulation often assume a unified, coherent self that observes and controls mental processes. However, neuroscientific evidence increasingly points to a more fragmented view of consciousness and selfhood.

Instead of a single, unified self, we might conceptualize the mind as a coalition of semi-independent processes, each with its own goals and methods of regulation. This "multiple selves" model challenges our intuitive understanding of self-awareness and suggests that emotion regulation might be more about negotiation between different aspects of our psyche than about top-down control.

Conclusion: Towards a New Paradigm

By integrating these perspectives, we can envision a new paradigm for understanding introspective awareness and cognitive modulation. This model would:

1. Recognize the limited role of conscious volition in emotion regulation
2. Emphasize the social and communicative functions of self-awareness
3. Incorporate the biochemical basis of regulatory processes
4. Embrace a more fragmented view of consciousness and selfhood

This approach challenges us to rethink fundamental concepts like free will, personal responsibility, and the nature of the self. It suggests that effective emotion regulation might be less about strengthening willpower and more about understanding and working with the complex, often unconscious processes that shape our emotional lives. By adopting this more nuanced and provocative perspective, we open up new avenues for research and potentially more effective interventions for emotional and cognitive well-being.



Monday, August 12, 2024

Human Distinctiveness in Different Cultures

I want to pass on a clip of text from Samuel Arbesman's recent Substack email, on the path dependence of fundamental ideas about ourselves. I suggest checking out the links to his related writing on human distinctiveness:


Awhile back I wrote about AI and human distinctiveness: basically my argument was that we should be less concerned with whether or not AI can do what we can do and care more about what we want to be doing. In other words, focus on what is quintessentially human, rather than what is uniquely human.

But perhaps some of these concerns are simply Western preoccupations, rather than universal human concerns?

In the recent book Fluke (which is fantastic!), Brian Klaas noted the following provocative point about differences between Western and Eastern thinking—and their views on human distinctness—and how it might have been due to the ecological milieu that each one arose from:

In this vision of the world, humans are distinct from the rest of the natural world. That felt true for the inhabitants of the Middle East and Europe around the time of the birth of Christianity. Camels, cows, goats, mice, and dogs composed much of the encountered animal kingdom, a living menagerie of beings that are quite unlike us.

In many Eastern cultures, by contrast, ancient religions tended to emphasize our unity with the natural world. One theory suggests that was partly because people lived among monkeys and apes. We recognized ourselves in them. As the biologist Roland Ennos points out, the word orangutan even means “man of the forest.” Hinduism has Hanuman, a monkey god. In China, the Chu kingdom revered gibbons. In these familiar primates, the theory suggests, it became impossible to ignore that we were part of nature—and nature was part of us.

This is almost a Guns, Germs, and Steel-kind of approach, but for ideas. At the risk of creating too much determinism here¹, it’s intriguing to explore the path dependence of ideas and concepts that organize how we think about the world and ourselves.

This reminds me of other research that examined how small historical distinctions can still affect our modern world, even if they are no longer relevant. For example, there is research that looks at how certain locations betray their histories as portage sites—places where boats or cargo were transported over land, allowing travel between more traversable waterways—despite this being obsolete. And yet it still has a certain long-term effect, as per this paper “Portage and Path Dependence”:



And returning to ideas, there is a paper entitled “Frontier Culture: The Roots and Persistence of ‘Rugged Individualism’ in the United States” that explores whether or not certain differences in location—areas considered the “frontier”—affect the geographical variation of ideas and beliefs in the United States. 

In the end, simply being more aware of the ideas and history that suffuse our thinking—rather than taking them for granted—is something important, whether or not we are trying to understand humanity’s place in the world, how technology should impact humanity, or why cities are located where they are.





Wednesday, July 03, 2024

Cumulative human culture began ~600,000 years ago, during the Middle Pleistocene

An interesting study by Paige and Perreault:  

Significance

Our species, Homo sapiens, occupies a uniquely diverse set of ecological habitats. Humans expanded into tropical forests and arctic tundra through cumulative culture. Cumulative culture is the accumulation of modifications, innovations, and improvements over generations through social learning. Generations of variant accumulations allow humans to use technologies and know-how well beyond what a single naive individual could invent independently within their lifetime. We analyzed the stone tools made during the last 3.3 My. We found that these stone tools remained simple until about 600,000 B.P. After that point, stone tools rapidly increased in complexity. Consistent with findings from other research teams, we suggest that this transition signals the development of cumulative culture in the human lineage.
Abstract
Cumulative culture, the accumulation of modifications, innovations, and improvements over generations through social learning, is a key determinant of the behavioral diversity across Homo sapiens populations and their ability to adapt to varied ecological habitats. Generations of improvements, modifications, and lucky errors allow humans to use technologies and know-how well beyond what a single naive individual could invent independently within their lifetime. The human dependence on cumulative culture may have shaped the evolution of biological and behavioral traits in the hominin lineage, including brain size, body size, life history, sociality, subsistence, and ecological niche expansion. Yet, we do not know when, in the human career, our ancestors began to depend on cumulative culture. Here, we show that hominins likely relied on a derived form of cumulative culture by at least ~600 kya, a result in line with a growing body of existing evidence. We analyzed the complexity of stone tool manufacturing sequences over the last 3.3 My of the archaeological record. We then compare these to the achievable complexity without cumulative culture, which we estimate using nonhuman primate technologies and stone tool manufacturing experiments. We find that archaeological technologies become significantly more complex than expected in the absence of cumulative culture only after ~600 kya.

Friday, May 10, 2024

Blueprint - Nicholas Christakis on the evolutionary origins of a good society

This opinion piece by Frank Bruni in the NYTimes motivated me to download and read Nicholas Christakis' magnum opus “Blueprint” (very much in the 'everything you need to know about humans' spirit of Sapolsky's "Behave" and Harari's "Sapiens," "Homo Deus," and "21 Lessons," all books that I have made the subject of previous posts). It echoes Pinker's emphasis on the more positive aspects of human nature and progress. It is a very engaging read, and not amenable to a simple summary, but here is a bit from his introduction:
How can people be so different from—even go to war with—one another and yet also be so similar? The fundamental reason is that we each carry within us an evolutionary blueprint for making a good society.
Genes do amazing things inside our bodies, but even more amazing to me is what they do outside of them. Genes affect not only the structure and function of our bodies; not only the structure and function of our minds and, hence, our behaviors; but also the structure and function of our societies. This is what we recognize when we look at people around the world. This is the source of our common humanity.
Natural selection has shaped our lives as social animals, guiding the evolution of what I call a “social suite” of features priming our capacity for love, friendship, cooperation, learning, and even our ability to recognize the uniqueness of other individuals. Despite all the trappings and artifacts of modern invention—our tools, agriculture, cities, nations—we carry within us innate proclivities that reflect our natural social state, a state that is, as it turns out, primarily good, practically and even morally. Humans can no more make a society that is inconsistent with these positive urges than ants can suddenly make beehives.
I believe that we come to this sort of goodness just as naturally as we come to our bloodier inclinations. We cannot help it. We feel great when we help others. Our good deeds are not just the products of Enlightenment values. They have a deeper and prehistoric origin. The ancient tendencies that form the social suite work together to bind communities, specify their boundaries, identify their members, and allow people to achieve individual and collective objectives while at the same time minimizing hatred and violence. For too long, in my opinion, the scientific community has been overly focused on the dark side of our biological heritage: our capacity for tribalism, violence, selfishness, and cruelty. The bright side has been denied the attention it deserves.
(The above is a repost of MindBlog's 6/3/19 post)

Friday, May 03, 2024

Can there be scientific criteria for what is moral behavior?

I want to pass on an essay by Sam Harris updating his 2011 book "The Moral Landscape." I'm doing this mainly so that I can look back at this post when I want to firm up my recall of some of its points, and I would recommend that readers interested in this area have a look. I largely agree with his positions with respect to distinctively human minds and culture that:

Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, of course, fully constrained by the laws of Nature (whatever those turn out to be). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

It is the case, however, that in the larger context of the evolution of life on this planet, the emergence of complicated life forms has depended on the cranking of a relentless Darwin machine generating ever more sophisticated forms of predator and prey, a process in which the 'well-being' of individual prey organisms is not obviously enhanced by their being another organism's food source.

Here is the essay:

In 2011 I published my third book, The Moral Landscape, which was an edited version of my doctoral dissertation, in which I argued that there are right and wrong answers to questions of human values, and that much of importance depends upon our admitting this and trying to work out how we can all make moral progress together.

The book was widely criticized, both for things I said in it and for things I hadn’t said. I did say a few things which needlessly provoked academic philosophers and graduate students. I wrote at one point, by way of explaining why I was dispensing with some of the terminology one might expect to encounter in any discussion about moral truth, that every mention of terms like “metaethics,” “deontology,” “noncognitivism,” “antirealism,” and the like, directly increases the amount of boredom in the universe. That’s still true, of course, but it pissed off a lot of academics. Worse, many people couldn’t get past the book’s subtitle—“How Science Can Determine Human Values”—because they have a far narrower conception of science than I do. Many people, including many scientists, seem pretty confused about the boundaries between science and other modes of thought, as I’ll discuss here.

Consider the concept of “objectivity,” which most people assume is central to science. It is central, but only in one sense of the term. As the philosopher John Searle once pointed out, we should distinguish between epistemological and ontological senses of “objectivity.” Of course, terms like “epistemological” and “ontological” also increase the amount of boredom in the universe, but I’m afraid they are indispensable.

Epistemology relates to the foundation of knowledge. How do we know what is true? In what sense can a statement be true? Ontology relates to questions about what exists. For instance, is there only one type of stuff in the universe? Are there only physical things, or are there really existent things which are not physical? For instance, do numbers exist beyond their physical representations, and if so, how?

Science is fully committed to epistemological objectivity—that is, to analyzing evidence and argument without subjective bias—but it is in no sense committed to ontological objectivity. It isn’t limited to studying “objects,” that is, purely physical things and processes. We can study human subjectivity—the mind as experienced from the first-person point of view—objectively, that is, without bias and other sources of cognitive error. And, as I have argued elsewhere, meditation is a crucial tool for doing this. It is simply a fact that human beings can become much better observers of their direct experience—and becoming better actually makes a wider range of experience possible.

Morality is subjective in the ontological sense—it’s not to be found out there among the atoms. It rests on the reality of consciousness and the experiences of conscious beings. To say that morality is “subjective” is not to say that it isn’t real. We can make truth claims about it—that is, we can be epistemologically objective about it.

I hope that distinction is clear. To say that science is committed to epistemic objectivity, is to say that science depends on certain epistemic values—values like coherence, and simplicity, and elegance, and predictive power. If I told you that I had an extremely important scientific theory that was self-contradictory, and needlessly complex, and could not account for past data, and could make no predictions whatsoever, you would understand that I must be joking, or otherwise speaking nonsense. We cannot separate statements of scientific fact from the underlying epistemic values of science. These values are axiomatic, which is to say that science does not discover them, or even attempt to justify them. It just presupposes their validity. If you suspect that I have just called the traditional distinction between facts and values into question, you would be right—and this is a point to which I will return.

For those unfamiliar with The Moral Landscape, here is my argument in brief: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, of course, fully constrained by the laws of Nature (whatever those turn out to be). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

Many people worry that any aspect of human subjectivity or culture could fit in the space provided: after all, a preference for chocolate over vanilla ice cream is a natural phenomenon, as is a preference for the comic Bill Burr over Bob Hope. Are we to imagine that there are universal truths about ice cream and comedy that admit of scientific analysis? Well, in a certain sense, yes. Science could, in principle, account for why some of us prefer chocolate to vanilla, and why no one’s favorite flavor of ice cream is aluminum. Comedy must also be susceptible to this kind of study. There will be a fair amount of cultural and generational variation in what counts as funny, but there are basic principles of comedy—like the violation of expectations, the breaking of taboos, etc.—that could be universal. Amusement to the point of laughter is a specific state of the human nervous system that can be scientifically studied. Why do some people laugh more readily than others? What exactly happens when we “get” a joke? These are ultimately questions about the human mind and brain. There will be scientific facts to be known here, and any differences in taste among human beings must be attributable to other facts that fall within the purview of science. If we were ever to arrive at a complete understanding of the human mind, we would understand human preferences of all kinds. And we might even be able to change them.

However, epistemic and ethical values appear to reach deeper than mere matters of taste—beyond how people happen to think and behave to questions of how they should think and behave. And it is this notion of “should” that introduces a fair amount of confusion into any conversation about moral truth. I should note in passing, however, that I don’t think the distinction between ethics and something like taste is as clear or as categorical as we might think. For instance, if a preference for chocolate ice cream allowed for the most rewarding experience a human being could have, while a preference for vanilla did not, we would deem it morally important to help people overcome any defect in their sense of taste that caused them to prefer vanilla—in the same way that we currently treat people for curable forms of blindness. It seems to me that the boundary between mere aesthetics and moral imperative—the difference between not liking Matisse and not liking the Golden Rule—is more a matter of there being higher stakes, and consequences that reach into the lives of other people, than of there being distinct classes of facts regarding the nature of human experience. There is much more to be said on this point, of course, but I will pass it by for the time being.

In my view, morality must be viewed in the context of our growing scientific understanding of the mind. If there are truths to be known about the mind, there will be truths to be known about how minds flourish—that is, about well-being altogether; consequently, there will be truths to be known about right and wrong and good and evil.

Many critics of The Moral Landscape claimed that my reliance on the concept of “well-being” was arbitrary and philosophically indefensible. Who’s to say that well-being is important at all or that other things aren’t far more important? How, for instance, could you convince someone who does not value well-being that he should value it? And even if one could justify well-being as the true foundation for morality, many have argued that one would need a “metric” by which it could be measured—else there could be no such thing as moral truth in the scientific sense. There is an unnecessarily restrictive notion of science underlying this last claim—as though scientific truths only exist if we can have immediate and uncontroversial access to them in the lab. A certain physicist, who will remain nameless, was in the habit of saying things like “I don’t know what a unit of well-being is,” as though he were regretfully delivering the killing blow to my thesis. I would venture that he doesn’t know what a unit of sadness is either—and units of joy, disgust, boredom, irony, envy, schadenfreude, or any other mental state worth studying won’t be forthcoming. If half of what many scientists say about the limits of science were true, the sciences of mind would be not merely doomed; there would be no facts for them to understand in the first place.

Consider the possibility of a much, much saner world than the one we currently live in: Imagine that—due to remarkable breakthroughs in technology, economics, psychological science, and political skill—we created a genuine utopia on Earth. Needless to say, this wouldn’t be boring, because we will have wisely avoided all the boring utopias. Rather, we will have created a global civilization of astonishing creativity, security, and happiness.

However, let us also imagine that some people weren’t ready for this earthly paradise once it arrived. Some were psychopaths who, despite enjoying the general change in quality of life, were nevertheless eager to break into their neighbors’ homes and torture them, just for kicks. A few had preferences that were incompatible with the flourishing of whole societies: Try as he might, Kim Jong Un just couldn’t shake the feeling that his cognac didn’t taste as sweet without millions of people starving beyond his palace gates. Given our advances in science, however, we are now able to alter preferences of this kind. In fact, we can painlessly deliver a firmware update to everyone. Imagine that we do that, and now the entirety of the species is fit to live in a global civilization that is as safe, and as fun, and as interesting, and as creative, and as filled with love as it can be.

It seems to me that this scenario cuts through the worry that the concept of well-being might leave out something that is worth caring about: for if you care about something that is not compatible with a peak of human flourishing—given the requisite changes in your brain, you would recognize that you were wrong to care about this thing in the first place. Wrong in what sense? Wrong in the sense that you didn’t know what you were missing. This is the core of my argument: I am claiming that there must be frontiers of human well-being that await our discovery—and certain interests and preferences surely blind us to them. In this sense epistemic and ethical values are fully entangled. There are horizons to well-being past which we cannot see. There are possible experiences of beauty and creativity and compassion that we will never discover. I think these are objective statements about the frontiers of subjective experience.

However, it is true that our general approach to morality does not require that we maximize global well-being. There is this tension, for instance, between what may be good for us, and what may be good for society. And much of ordinary morality is a matter of our grappling with this tension. We are selfish to one degree or another; we lack complete information about the consequences of our actions; and even where we possess sufficient information, our interests and preferences often lead us to ignore it. But our failures to be motivated to seek higher goods, or to motivate others to seek them, do not suggest that no higher goods exist.

In what sense can an action be morally good? And what does it mean to make a good action better? For instance, it seems good for me to buy my daughter a birthday present, all things considered, because this will make both of us happy. Few people would fault me for spending some of my time and money in this way. But what about all the little girls in the world who suffer terribly at this moment for want of resources? Here is where an ethicist like Peter Singer will pounce, arguing that there actually is something morally questionable—possibly even reprehensible—about my buying my daughter a birthday present, given my knowledge of how much good my time and money could do elsewhere. What should I do? Singer’s argument makes me uncomfortable, but only for a moment. It is simply a fact about me that the suffering of other little girls is often out of sight and out of mind—and my daughter’s birthday is no easier to ignore than an asteroid impact. Can I muster a philosophical defense of my narrow focus? Perhaps. It might be that Singer’s case leaves out some important details: what would happen if everyone in the developed world ceased to shop for birthday presents and all other luxuries? Might the best of human civilization just come crashing down upon the worst? How can we spread wealth to the developing world if we do not create vast wealth in the first place? These reflections, self-serving and otherwise, land me in a toy store, looking for something that isn’t pink.

So, yes, it is true that my thoughts about global well-being did not amount to much in this instance. And yet, most people wouldn’t judge me for it. But what if there were a way for me to buy my daughter a present and also cure another little girl of cancer at no extra cost? Wouldn’t this be better than just buying the original present? What if there were a button I could push near the cash register that literally cured a distant little girl somewhere of cancer? Imagine if I declined the opportunity to push this button, saying, “What is that to me? I don’t care about other little girls and their cancers.” That, of course, would be monstrous. And it is only against an implicit notion of global well-being that we can judge my behavior to be less good than it might otherwise be. It is true that no one currently demands that I spend my time seeking, in every instance, to maximize global well-being—nor do I demand it of myself—but if global well-being could be maximized, that would be much better (by the only definition of “better” that makes any sense). I believe that is an objectively true statement about subjective reality in this universe.

The fact that we might not be motivated by a moral truth doesn’t suggest that moral truths don’t exist. Some of this comes down to confusion over a prescriptive rather than descriptive conception of ethics. It’s the difference between “should” and “can.” Whatever our preferences and capacities are at present—regardless of our failures to persuade others, or ourselves, to change our behaviors—our beliefs about good and evil must still relate to what is ultimately possible for human beings. And we can’t think about this deeper reality by focusing on the narrow question of what a person “should” do in the gray areas of life where we spend so much of our time. It is, rather, the extremes of human experience that throw sufficient light by which we can see that we stand upon a moral landscape: For instance, are members of the Islamic State wrong about morality? Yes. Really wrong? Yes. Can we say so from the perspective of science? Yes. If we know anything at all about human well-being—and we do—we know that the Islamic State is not leading anyone, including themselves, toward a peak on the moral landscape. We know, to a moral certainty, that human life can be better than it is in a society where they routinely decapitate people for being too rational.

When I wrote The Moral Landscape, I didn’t appreciate how much of ethical philosophy was conflated with concerns about personal motivation and public persuasion. For instance, it is widely imagined that a belief that one act is truly better than another (that is, that moral truths exist) must entail a commitment to acting in the prescribed way (that is, motivation). It must also rest on reasons that can be effectively communicated to others (that is, persuasion). If, for instance, I believe that I would be a better person, and the world a marginally better place, if I were vegetarian (this is a possible moral truth), then, many people expect, I must be motivated to exclude meat from my diet and be able to persuade others to do likewise. The fact that I’m not sufficiently motivated to do this suggests that my presumed knowledge of moral truth is specious—either no such truths exist, or I do not, in fact, know them. The common idea is that for a moral claim to be objectively true, it must compel a person to follow it. Real values must cause action—not contingently, in combination with other motives, but absolutely—and they in turn constitute rational reasons for such action. Otherwise, the philosopher David Hume would be right, and the only way for claims about moral truth to be effective is for them to be combined with some associated passion or desire. Reason alone is useless.

But this paints an unrealistic picture of the human mind. Let’s take a simpler case: Let’s say that I want to lose 10 pounds. As it happens, I do, and I have absolutely no doubt that losing 10 pounds is possible (this is a biological truth). I also know that I would be marginally happier for having lost those pounds (this is a psychological truth). I am also quite certain that I understand the process by which pounds can be lost. I need only eat less than I generally do, and persist until I have lost the weight (this is another biological truth). These beliefs are cognitively valid, in that they describe objective truths about my body and mind, and about how I would feel in a possible future. I am totally unconflicted in my desire to lose the weight, in that I absolutely want to lose it. Unfortunately, that is not all I want. I also want to eat ice cream, preferably once a day.

The fact that I am not sufficiently motivated to shun ice cream, says nothing at all about my unconflicted desire to be thinner or the accuracy of my understanding of how to lose weight. My desire for ice cream is an independent fact about me. And gratifying this desire has consequences.

The point, of course, is that we can know what is true, without any doubt, and yet our knowledge is not guaranteed to produce behavior that is aligned with that truth. Such failures of will do not suggest that the relevant truths are just fictions.

At this point, I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors:

The first project is to understand what people do in the name of “morality.” We can look at the world, witnessing all of the diverse behaviors, and cultural norms, and institutions, and morally salient emotions like empathy and disgust, and we can study how these things affect human communities, both now and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and in any present generation we can understand them in psychological and neurobiological terms. And we can call the fruits of this effort a “science of morality.” This would be a purely descriptive science of a sort that many scientists have begun to build.

And for most scientists, this descriptive project seems to exhaust all the legitimate points of contact between science and morality. But I think there are two other projects that we could concern ourselves with, which are arguably more important.

The second project would be to actually understand how good human life could be. This would require that we get clearer about what we mean, and should mean, by terms like “right” and “wrong” and “good” and “evil.” We would need to understand how our moral intuitions relate to human experience altogether, and to use this new discipline to think more intelligently about how to maximize human well-being. Of course, philosophers may think that this begs one of the important questions (again, what makes well-being important?), and I’ll get back to that. But assuming for the moment that it is important, understanding how to maximize it is a distinct project. How good could human life be, and how do we get there? How do we avoid making the mistakes that would prevent us from getting there? What are the paths upward on the moral landscape?

The third project is a project of persuasion: How can we persuade all of the people who are committed to silly and harmful things in the name of “morality” to change their commitments and to lead better lives? I think that this third project is actually the most important project facing humanity at the moment. It subsumes everything else we could care about—from arresting climate change, to reducing the risk of nuclear war, to curing cancer, to saving the whales. Any effort that requires that we collectively get our priorities straight and marshal our time and resources would fall within the scope of this project. To build a viable global civilization we must begin to converge on the same economic, political, and environmental goals.

Obviously the project of moral persuasion is very difficult—but it strikes me as especially difficult if you can’t figure out in what sense anyone could ever be right or wrong about questions of human values. Understanding right and wrong in universal terms is Project Two, and that’s what I’m focused on.

There are impediments to thinking about Project Two: the main one being that most right-thinking, well-educated people—certainly most scientists and public intellectuals, and I suspect, most journalists—have been convinced that something in the last 200 years of our intellectual progress has made it impossible to actually speak about “moral truth.” Not because human experience is so difficult to study or the brain too complex, but because there is thought to be no intellectual basis from which to say that anyone is ever right or wrong about questions of value.

My aim in The Moral Landscape was to undermine this assumption. I think it is based on several fallacies and double standards and, frankly, on some bad philosophy. And apart from being just wrong, this view has terrible consequences.

In 1947, when the United Nations was attempting to formulate a universal declaration of human rights, the American Anthropological Association stepped forward and said that it just couldn’t be done—for this would be to merely foist one provincial notion of human rights on the rest of humanity. Any notion of human rights is the product of culture, and declaring a universal conception of human rights is an intellectually illegitimate thing to do. This was the best our social sciences could do with the crematoria of Auschwitz still smoking.

It has long been obvious that we need to converge, as a global civilization, in our beliefs about how we should treat one another. For this, we need some universal conceptions of right and wrong. So in addition to just not being true, I think skepticism about moral truth actually has consequences that we really should worry about.

Definitions matter. And in science we are always in the business of making definitions that serve to constrain the path of any conversation. There is nothing about this process that condemns us to epistemological relativism or that nullifies truth claims. For instance, we define “physics” as, loosely speaking, our best effort to understand the behavior of matter and energy in the universe. The discipline is defined with respect to the goal of understanding how the physical world behaves.

Of course, anyone is free to define “physics” in some other way. A Creationist could say, “Well, that’s not my definition of physics. My physics is designed to match the Book of Genesis.” But we are free to respond to such a person by saying, “You really don’t belong at this conference. That’s not the ‘physics’ we are interested in.” Such a gesture of exclusion is legitimate and necessary. The fact that the discourse of physics is not sufficient to silence such a person, the fact that he cannot be brought into our conversation and subdued by its terms, does not undermine physics as a domain of objective truth.

And yet, on the topic of morality, we seem to think that the possibility of differing opinions puts the very reality of the subject matter in question. The fact that someone can come forward and say that his morality has nothing to do with human flourishing—that it depends upon following Sharia law, for instance—the very fact that such a position can be articulated, has caused people to think that there’s no such thing as moral truth. Morality must be a human invention—because look, that guy has a morality of his own. The Taliban don’t agree with us. Who are we to say they’re wrong? But this is just a fallacy.

We have an intuitive physics, but much of our intuitive physics is wrong with respect to the goal of understanding how matter and energy behave in this universe. I am saying that we also have an intuitive morality, and much of our intuitive morality may be wrong with respect to the goal of maximizing human flourishing—and with reference to the facts that govern the well-being of conscious creatures, generally.

I will now deal with the fundamental challenge to the thesis I put forward in The Moral Landscape, and argue, briefly, that the only sphere of legitimate moral concern is the well-being of conscious creatures. I’ll say a few words in defense of this assertion, but I think the idea that it even has to be defended is the product of several fallacies and double standards that we’re not noticing. And I’ll mention a few.

I am claiming that consciousness is the only context in which we can talk about morality and human values. Why is consciousness not an arbitrary starting point? Well, what’s the alternative? Just imagine someone coming forward claiming to have some other sources of value that have nothing to do with the actual or potential experience of conscious beings. Whatever these things are, they cannot affect the experience of anything in the universe, in this life or in any other.

If you put these imagined sources of value in a box, I think it is obvious that what you would have in that box would be—by definition—the least valuable things in the universe. They would be—again, by definition—things that cannot be cared about. If someone says they care about these things—things that have no actual or potential effect on the experience of anyone, anywhere, at any time—I have to say that I think he is only pretending to care about these things. In the same way that I would say that a person is only pretending to believe that 2 + 2 = 5. So I don’t think consciousness is an arbitrary starting point. When we’re talking about right and wrong, and good and evil, and about outcomes that matter, we are necessarily talking about actual or potential changes in conscious experience. This really is an axiom that I think has to be accepted, but unlike most axioms, I don’t understand how anyone can even imagine not accepting it.

I would further add that the concept of “well-being” captures every sort of change in conscious experience that we can care about. The challenge is to have a definition of well-being that is capacious enough to absorb everything we can, in fact, care about, now and in the future. This is why I tend not to call myself a “consequentialist” or a “utilitarian” in philosophy, because traditionally, these positions have been bounded in such a way as to make them seem very brittle and exclusive of other concerns.

Consider Peter Singer’s Shallow Pond problem: Let’s say you’re walking home and spot a young child drowning in a shallow pond. This is a clear opportunity to save a life, but unfortunately you are wearing expensive shoes that will be ruined if you wade into that muddy water. So you walk on and a child dies. Singer argues that this is analogous to what we do every time we delete a fundraising email from UNICEF or any other organization that presents a concrete opportunity to save a child’s life. And if you’re like most people, you are left feeling that the argument is very hard to find fault with, and yet you’re somehow unmoved. Something feels left out. Hence, you get the sense that consequentialism can’t be the whole story.

However, I think that’s just an incomplete tally of the consequences. We all know, for instance, that it would take a very different sort of person to walk past a child drowning in twelve inches of water, out of concern for getting his shoes dirty, than it takes to ignore an appeal from UNICEF. It says much more about you if you can walk past a drowning child. And if we were all this sort of person, there would be terrible ramifications throughout our culture. Simply counting bodies doesn’t get at the differences between these two actions—or differentiate the levels of selfishness and callousness entailed by each. Of course, part of Singer’s project as a philosopher is to argue that the two cases aren’t as different as they appear—that is, he is saying that we should create a culture in which we begin to feel just as depraved not sending our disposable wealth to Africa as we would letting our neighbors starve. And there may be some truth to that. However, given that we don’t have that culture, and given the psychological significance of proximity, the two cases remain quite different in moral terms. It is entirely normal, and compatible with psychological health and goodness, to not respond to every appeal you receive from a charity that is saving lives. It isn’t normal, or sane, to decline to save a child who is dying right in front of you, just because you don’t want to get your shoes wet. In order to be really consequentialist in our ethics, therefore, the challenge is to get clear about what the actual consequences of an action are—out in the world and in the minds of all involved—and to understand what changes in human experience are possible, and about which changes actually matter. There is usually much more to the story than just counting bodies.

In thinking about a universal framework for morality, I now think in terms of what I call a “moral landscape.” Now perhaps there is a place in hell for anyone who would repurpose a cliché in this way, but the phrase, “the moral landscape” actually captures what I’m after: I’m envisioning a space of peaks and valleys, where the peaks correspond to the heights of flourishing possible for any conscious system, and the valleys correspond to the deepest depths of misery.

To speak specifically of human beings for the moment: anything that can effect a change in human consciousness might lead to movement across the moral landscape. So changes to our genome, and changes to our economic systems—and changes occurring on any level in between that can affect human well-being for good or for ill—would translate into movements within this space of possible human experience.

A few interesting things drop out of this model: Clearly, it is possible, or even likely, that there are many (culturally and psychologically distinct) peaks on the moral landscape. Perhaps there is a way to maximize human flourishing in which we follow Peter Singer as far as we can go, and somehow train ourselves to be truly dispassionate toward friends and family, without weighting our children’s welfare more than the welfare of other children, and perhaps there’s another peak where we remain biased toward our own children, within certain limits, while correcting for this bias by creating a social system which is, in fact, fair and compassionate. Perhaps there are a thousand different ways to tune the variable of selfishness versus altruism to land us on a peak on the moral landscape.

However, there will be many more ways to not be on a peak. And it is clearly possible to be wrong about how to move from our present position to some higher spot on the landscape. This follows directly from the observation that whatever conscious experiences are possible for us are a product of the way the universe is. Our conscious experience arises out of the laws of nature, and the states of our brain, and our entanglement with the world. Therefore, there are right and wrong answers to the question of how to maximize human flourishing in any moment.

This becomes very easy to see when we imagine there being only two people on earth: we can call them Adam and Eve. Ask yourself, are there right and wrong answers to the question of how Adam and Eve might maximize their well-being? Clearly there are. Wrong answer number one: they can take turns smashing each other in the face with a large rock. This will not be a way of living their best possible lives.

Of course, there are zero sum games they could play. They could be psychopaths who might utterly fail to collaborate in positive-sum ways. But, clearly, the best responses to their circumstance will not be zero-sum. The prospects of their flourishing and finding deeper and more durable sources of satisfaction will only be revealed by some form of cooperation. And all the worries that people normally bring to these discussions—like deontological principles or a Rawlsian concern about fairness—can be considered in the context of our asking how Adam and Eve can navigate the space of possible experiences so as to find a genuine peak of human flourishing, regardless of whether it is the only peak. Once again, multiple, equivalent but incompatible peaks still allow us to say that there is a larger reality in which there are right and wrong answers to moral questions. For instance, there are many correct answers to the question, “What should a human being eat?” And yet there are an even greater number of wrong answers.

There might be many different, mutually incompatible, but nevertheless equivalently good lives that Adam and Eve could live. But the crucial point is that all of these lives really are better than the countless ways they could be miserable.

One thing we must not get confused about at this point is the difference between answers in practice and answers in principle. Needless to say, fully understanding the possible range of experiences available to Adam and Eve represents a fantastically complex problem. And it gets more complex when we add another 8 billion people to the experiment. But I would argue that it’s not a different problem; it just gets more complicated.

By analogy, consider economics: Is economics a practical science yet? It’s hard to know. Maybe economics will never get better than it is now. Perhaps we’ll be surprised every decade or so by something terrible that happens in the economy, and we’ll be forced to conclude that we’re blinded by the complexity of our situation. But to say that it is difficult or impossible to solve certain problems in practice does not suggest that there are no right and wrong answers to these problems in principle.

The complexity of economics would never tempt us to say that there are no right and wrong ways to design economic systems, or to respond to a financial crisis. Nobody will ever say that it’s a form of bigotry to criticize another country’s response to a banking failure. Just imagine how terrifying it would be if the smartest people around all more or less agreed that we had to be nonjudgmental about everyone’s view of economics and about every possible response to a looming recession. Imagine a 50 percent plunge, in a single day, in the value of the stock market—and then imagine the world’s leading economists telling us that there simply are no right answers, and therefore no wrong ones, in how to respond, because economics is just a cultural construct. What masochistic insanity would that be?

And yet that is largely where we stand as an intellectual community on the most important questions of right and wrong and good and evil. I don’t think you have enjoyed the life of the mind until you have witnessed a philosopher or scientist talking about the “contextual legitimacy” of the burka, or of female genital excision, or any of these other barbaric practices that we know cause needless human misery. We have convinced ourselves that somehow science is, by definition, a value-free space and that we can’t make value judgments about beliefs and practices that needlessly undermine our attempts to build sane and productive societies.

The truth is, science is not value-free. Good science is the product of our valuing evidence, and logical consistency, and parsimony, and other intellectual virtues. And if you don’t value those things, you can’t participate in a scientific conversation. I’m arguing that we need not worry about the people who don’t value human flourishing, or who say they don’t. We need not listen to people who come to the table saying, “You know, we want to cut the heads off blasphemers at half-time at our soccer games because we have a book dictated by the Creator of the universe which says we should.” In response, we are free to say, “You are just confused about everything. Your “physics” isn’t physics, and your “morality” isn’t morality.” These are equivalent moves, intellectually speaking. They are borne of the same entanglement with real facts about the way the universe is. In terms of morality, our conversation can proceed with reference to facts about the real experiences of conscious creatures. It seems to me to be just as legitimate, scientifically speaking, to define “morality” in terms of well-being as it is to define “physics” in terms of the behavior of matter and energy. But most scientists, even most of those engaged in the study of morality, don’t seem to realize this.

Most criticisms of The Moral Landscape seem to stumble over its subtitle, “How Science Can Determine Human Values,” and I admit that this wording has become an albatross. To my surprise, many people think about science primarily in terms of academic titles, and budgets, and university architecture, and not in terms of the logical and empirical intuitions that allow us to form justified beliefs about the world. The point of my book was not to argue that “science” bureaucratically construed can subsume all talk about morality. My purpose was to show that moral truths exist and that they must fall (in principle, if not in practice) within some (perhaps never to be complete) understanding of the way conscious minds arise in this universe. For practical reasons, it is often necessary to draw boundaries between academic disciplines, but physicists, chemists, biologists, and psychologists rely on the same processes of thought and observation that govern all our efforts to stay in touch with reality. As do most normal people simply operating by what we call common sense.

Let’s say you come home one day and find water pouring through the ceiling of your bedroom. Imagining that you have a gaping hole in your roof, you immediately call the man who installed it. The roofer asks, “Is it raining where you live?” This is a good question. In fact, it hasn’t rained for months. Is this roofer a scientist? Not technically, but he is thinking just like one. An empirical understanding of how water travels, and a little logic, just reveal that your roof is not the problem.

So now you call a plumber. Is a plumber a scientist? No more than a roofer is, but any competent plumber will generate hypotheses and test them—and his thinking will conform to the same principles of reasoning that every scientist uses. When he pressure tests a section of pipe, he is running an experiment. Would this experiment be more “scientific” if it were funded by the National Science Foundation? No. By contrast, when a world-famous geneticist like Francis Collins declares that the biblical God exists, and he installed immortal souls, free will, and morality in one species of primate, he is repudiating the core values and methods of science with every word. Drawing the line between science and non-science by reference to a person’s occupation is just too crude to be useful—but it is what many people seem to do.

I am, in essence, defending the unity of knowledge here—the idea that the boundaries between disciplines are mere conventions and that we inhabit a single sphere in which to form true (or truer) beliefs about the world. Strangely, this remains a controversial thesis, and it is often met with charges of “scientism.” Sometimes, the unity of knowledge is very easy to see: Is there really a boundary between the truths of physics and those of biology? No. And yet it is practical, and even necessary, to treat these disciplines separately most of the time. In this sense, the boundaries between disciplines are analogous to political borders drawn on maps. Is there really a difference between California and Arizona at their shared border? No, but we divide this stretch of desert as a matter of convention. However, once we begin talking about non-contiguous disciplines—physics and sociology, say—people worry that a single, consilient idea of truth can’t span the distance. Suddenly, the different colors on the map look hugely significant. But I’m convinced that this is an illusion.

My interest is in the nature of reality—what is actual and what is possible—not in how we organize our talk about it in our universities. There is nothing wrong with a mathematician opening a door in physics, a physicist making a breakthrough in neuroscience, a neuroscientist settling a debate in the philosophy of mind, a philosopher overturning our understanding of history, a historian transforming the field of anthropology, an anthropologist revolutionizing linguistics, or a linguist discovering something foundational about our mathematical intuitions. The circle is now complete, and it simply does not matter where these people keep their offices or which journals they publish in.

Many people worry that science cannot derive moral judgments solely from scientific descriptions of the world. But no branch of science can derive its judgments solely from scientific descriptions of the world. We have intuitions of truth and falsity, logical consistency, and causality that are foundational to our thinking about anything. Certain of these intuitions can be used to trump others: It seems rational to think, for instance, that our expectations of cause and effect could be routinely violated by reality at large. It is rational, even, to think that apes like ourselves may simply be unequipped to understand what is really going on in the universe. That is a perfectly cogent idea, even though it seems to make a mockery of most of our other cogent ideas. But the fact is that all forms of scientific inquiry pull themselves up by some intuitive bootstraps. The logician Kurt Gödel proved this for arithmetic, and it seems intuitively obvious for other forms of reasoning too. At some point we have to simply step out of the darkness. If you would test this claim, I invite you to define the concept of “causality” in noncircular terms. Some intuitions are truly basic to our thinking.

In The Moral Landscape, I argue that this need not embarrass us in the field of morality, and I claim, for instance, that the conviction that the worst possible misery for everyone is bad and should be avoided is among the most basic intuitions we can form about anything. The worst… possible… misery… for everyone… is bad. I claim that there is no place to stand where one can coherently doubt this statement. There is no place to stand where one can coherently wonder whether the worst possible misery for everyone is really bad. Of course, one can pretend to wonder this. Just as one can pretend to think thoughts like, “what if all squares are really round?” But if you make contact with the meanings of words—the worst, possible, misery, for everyone—you will see that if bad means anything, it applies here.

We have to start somewhere. The epistemic values of science are not “self-justifying”—we just can’t get completely free of them. We can bracket our ordinary notions of cause and effect, or logical consistency, in local cases, as we do in quantum mechanics, for instance, but these are cases in which we are then forced to admit that we don’t (yet) understand what is going on. Our knowledge of the world seems to require that it behave in certain ways (e.g., if A is bigger than B, and B is bigger than C, then A must be bigger than C). And when these fundamental principles appear to be violated, we are invariably confused. Science can’t use logic to validate logic, because that presupposes the value of logic from the start. Similarly, we can’t use evidence to justify valuing evidence. We simply do value logic and evidence, and we make no apologies for pulling ourselves up by our bootstraps in this way. Physics can’t justify the intellectual tools one needs to do physics. Does that make it unscientific? People who object to my claim that the well-being of conscious creatures is foundational, are holding this claim about moral truth to a standard of self-justification that no branch of science can meet.

Again, I admit that there may be something confusing about my use of the term “science”: I want it to mean, in its broadest sense, our best effort to understand reality at every level. Obviously, there is nothing wrong with using this term in a narrower way, to name a specialized form of any such effort. The problem, however, is that there is no telling where and how the pursuits of journalists, historians, and plumbers will become entangled with the work of official “scientists.” To cite an example I’ve used elsewhere: Was the Shroud of Turin a medieval forgery? For centuries, this was a question for historians to answer—until we developed the technique of radiocarbon dating. Now it is a question for chemistry. There are no real boundaries between our various modes of seeking to understand the world.

Most people who approach moral philosophy take for granted that the traditional categories of consequentialism, deontology, and virtue ethics are conceptually valid and worth maintaining. However, I believe that partitioning moral philosophy in this way begs the very question at issue—and this is another reason I tend not to identify myself as a “consequentialist.” Everyone knows—or thinks he knows—that consequentialism fails to capture much of what we value. But if the categorical imperative (one of Kant’s foundational contributions to deontology, or rule-based ethics) reliably made everyone miserable (that is, it reliably had bad consequences), no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start. Ultimately, therefore, I think that consequences, whether real or imagined, are always the point.

It seems clear that a complete scientific understanding of all possible minds would yield a complete understanding of all the ways in which conscious beings can thrive or suffer in this universe. What would such an account leave out that we (or any other conscious being) could conceivably care about? Gender equality? Respect for authority? Courage? Intellectual honesty? Either these have consequences for the minds involved, or they have no consequences. To say that something matters, is to claim that it matters to someone, actually or potentially. This is a claim about consequences, spelled out as changes in conscious experience, now or in the future.

However, many people seem to believe that a person can coherently value something for reasons that have nothing to do with its actual or potential consequences. It is true that certain philosophers have claimed this. For instance, John Rawls said that he cared about fairness and justice independent of their effects on human life. But I just don’t find this claim psychologically credible or conceptually coherent. After all, concerns about fairness predate our humanity. Capuchin monkeys worry about fairness. While they are very happy to be given slices of cucumber, they get very angry if they see another monkey getting fed grapes at the same time. Do you think that these monkeys are worried about fairness as an abstract principle, or do you think they just don’t like the way it feels to be treated unfairly?

Traditional moral philosophy also tends to set arbitrary limits on what counts as a consequence. Imagine, for instance, that a reckless driver is about to run over a puppy, and I, at great risk to myself, kick the puppy out of the car’s path, thereby saving its life. The consequences of my actions seem unambiguously good, and I will be a hero to animal lovers everywhere. However, let’s say that I didn’t actually see the car approaching and simply kicked the puppy because I wanted to cause it pain. Are my actions still good? Students of philosophy have been led to imagine that scenarios of this kind pose serious challenges to consequentialism.

But why should we ignore the consequences of a person’s mental states? If I am the kind of man who prefers kicking puppies to petting them, I have a mind that will reliably produce negative experiences—for both myself and others. Whatever is bad about being an abuser of puppies can be explained in terms of the consequences of living as such a person in the world. Yes, being deranged, I might get a momentary thrill from being cruel to a defenseless animal, but at what cost? Do my kids love me? Am I even capable of loving them? What rewarding experiences in life am I missing? There are guaranteed to be some. Intentions matter because they color our minds in every moment. They also determine much of our behavior, and thereby affect the lives of other people. And these people respond to us in turn. As our minds are, so our lives (largely) become.

Of course, intentions aren’t the only things that matter, as we can readily see in this case. It is quite possible for a bad person to inadvertently do some good in the world. But I argue that the inner and outer consequences of our thoughts and actions account for everything of value here. If you disagree, the burden is on you to come up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences, whether in the world or in the minds of any conscious beings that could possibly be affected.

I also think the spuriousness of our traditional categories in moral philosophy can be seen in how we teach our children to be good. Why do we want them to be good in the first place? Well, at a minimum, we’d rather they not wind up murdered in a ditch. More generally, we want them to flourish—to live happy, creative lives—and to contribute meaningfully to the lives of others. All this entails talking about rules and norms (i.e., deontology), a person’s character (i.e., virtue ethics), and the good and bad consequences of certain actions (i.e., consequentialism). But it all reduces to a concern for the well-being of our children and of the people with whom they will interact. I don’t believe that any sane person is concerned with abstract principles and virtues—such as justice and loyalty—independent of the ways they affect the lives of real people.


So what do we really mean by words like “should” and “ought”? Ethics is prescriptive only because we tend to talk about it that way—and I believe this emphasis comes, in large part, from the stultifying influence of Abrahamic religion. We could just as well think about ethics descriptively. Certain experiences, relationships, social institutions, and technological developments are possible—and there are more or less direct ways to arrive at them. Therefore, we have a navigation problem. To say that we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery. To say that “You shouldn’t lie” (a prescriptive statement) is synonymous with saying that “Lying needlessly complicates people’s lives, destroys reputations, and undermines trust” (a descriptive statement). Saying that “We should defend democracy from totalitarianism” (prescriptive) is another way of saying “Democracy is far more conducive to human flourishing than the alternatives are” (descriptive). In my view, moralizing notions like “should” and “ought” are just ways of indicating that certain experiences and states of being are clearly better than others.

Strangely, the logical use of “ought” doesn’t confound us the way the moral one does. For instance, we could say that one “ought” to obey the law of the excluded middle, or that one ought to finish the equation 2+2= with a 4. We could speak prescriptively in this way about logic, but it would add nothing but confusion. And it would invite us to speculate on the apparent mystery that some people persist in making logical errors, all the while knowing that they shouldn’t do so. But, of course, this is no mystery at all.

There need be no imperative to be good—just as there’s no imperative to be smart or even sane. A person may be wrong about what’s good for him (and for everyone else), but he’s under no obligation to correct his error—any more than he is required to understand that π is the ratio of the circumference of a circle to its diameter. A person may be mistaken about how to get what he wants out of life, and he may want the wrong things (i.e., things that will reliably make him miserable), just as he may fail to form true/useful beliefs in any other area. I am simply arguing that we live in a universe in which certain conscious states are possible, that some are better than others, and that movement in this space will depend on the laws of nature. Many people think that I must add an extra term of obligation—a person should be committed to maximizing the well-being of all conscious creatures. And this is what they think makes morality totally different than every other way of thinking about the world. But I see no need for this.

Imagine that you could push a button that would make every person on earth a little more creative, compassionate, intelligent, and fulfilled—in such a way as to produce no negative effects, now or in the future. This would be “good” in the only moral sense of the word that I understand. However, to make this claim, one needs to acknowledge a larger space of possible experiences (e.g. a moral landscape). What does it mean to say that a person should push this button? It means that making this choice would do a lot of good in the world without doing any harm. And a person’s unwillingness to push the button would say something very unflattering about him. After all, what possible motive could a person have for declining to increase everyone’s well-being (including his own) at no cost? I think our notions of “should” and “ought” can be derived from these facts and others like them. Pushing the button simply is better for everyone involved. And this is a statement of objective fact about the subjectivity of conscious beings. What more do we need to motivate prescriptive judgments like “should” and “ought” in this case?

Following David Hume, many philosophers think that “should” and “ought” can only be derived from our existing desires and goals—otherwise, there simply isn’t any moral sense to be made of what “is.” But this skirts the essential point: Some people don’t know what they’re missing. Thus, their existing desires and goals are not necessarily a guide to the moral landscape. In fact, it is perfectly coherent to say that all of us live, to one or another degree, in ignorance of our deepest possible interests. I am sure that there are experiences and modes of living available to me that I really would value over others if I were only wise enough to value them. It is only by reference to this larger space of possible experiences that my current priorities can be right or wrong. And unless one were to posit, against all evidence, that every person’s peak on the landscape is idiosyncratic and zero-sum (i.e., my greatest happiness will be unique to me and will come at the expense of everyone else’s), the best possible world for me seems very likely to be (nearly) the best possible world for everyone else. After all, do you think you’d be better off in a world filled with happy, peaceful, creative people, or one in which you drank the tears of the damned?
 

Part of the resistance I’ve encountered to the views presented in The Moral Landscape comes from readers who appear to want an ethical standard that gives clear guidance in every situation and doesn’t require too much of them. People want it to be easy to be good—and they don’t want to think that they are not living as good a life as they could be. This is especially true when balancing one’s personal well-being against the well-being of society. But the truth is, most of us are profoundly selfish, and most people don’t want to be told that being selfish is wrong. As I tried to make clear in the book, I don’t think it is wrong, up to a point. I suspect that an exclusive focus on the welfare of the group is not the best way to build a civilization that could secure it. Some form of enlightened selfishness seems the most reasonable approach to me—in which we are more concerned about ourselves and our children than about other people and their children, but not callously so.  However, the well-being of the whole group is the only global standard by which we can judge specific outcomes to be good.

The question of how to think about collective well-being is difficult. However, I think the paradoxes that the philosopher Derek Parfit famously constructed here (e.g. “The Repugnant Conclusion”) are similar to Zeno’s paradoxes of motion. How do any of us get to the coffee pot in the morning if we must first travel half the distance to it, and then half again, ad infinitum? Apparently, this geometrical party trick enthralled philosophers for centuries—but I suspect that no one ever took Zeno so seriously as to doubt that motion was possible. Once mathematicians showed us how to sum an infinite series, the problem vanished. Whether or not we ever shake off Parfit’s paradoxes around population ethics, there is no question that the limit cases exist: The worst possible misery for everyone really is worse than the greatest possible happiness. Between these two poles, it seems to me, we can talk about moral truth without hedging. We are still faced with a very real and all-too-consequential navigation problem. Where to go from here? Some experiences are sublime, and others are truly terrible—and all await discovery by the requisite minds. Certain states of pointless misery are possible—how can we avoid them? As far as I can see, saying that we “should” avoid them adds nothing to the import of the phrase “pointless misery.” Is pointless misery a bad thing? Well if it isn’t bad, what is? Even if you want to dispense with words like “bad” and “good” and remain entirely nonjudgmental, countless states of suffering and well-being are there to be realized—and we are, at this very moment, moving toward some and away from others, whether we know it or not.
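The mathematical point alluded to above can be made explicit. Zeno’s successive half-distances form a geometric series, and once we know how to sum such a series, the paradox dissolves: infinitely many steps cover only a finite total distance. A one-line sketch (my addition, not part of the original text):

```latex
\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n}
  = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  = \frac{1/2}{\,1 - 1/2\,}
  = 1
```

At constant speed, a finite total distance takes finite time, which is why no one ever doubted that motion was possible, whatever the geometry seemed to suggest.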

And if we are going to worry about how our provincial human purposes frame our thinking about reality, let’s worry about this consistently. Just as we can’t have a science of medicine without valuing health, I believe morality is also inconceivable without a concern for well-being, and that wherever people talk about “good” and “evil” in ways that clearly have nothing to do with well-being, they are misusing these terms. In fact, people have been confused about medicine, nutrition, exercise, and related topics for millennia. Even now, many of us harbor beliefs about human health that have nothing to do with biological reality. Is this diversity of opinion a sign that health truly falls outside the purview of science?

And if we are going to balk at axiomatically valuing health or well-being, why accept any values at all in our epistemology? For instance, how is a desire to understand the world any more defensible? I would argue that satisfying our curiosity is a component of our well-being, and when it isn’t—for instance, when certain forms of knowledge seem guaranteed to cause great harm—it is perfectly rational for us to decline to seek such knowledge. I’m not even sure that curiosity grounds most of our empirical truth-claims. Is my knowledge that fire is hot borne of curiosity, or of my memory of having once been burned and my inclination to avoid pain and injury in the future?
 

We have certain logical and moral intuitions that we cannot help but rely upon to understand and judge the desirability of various states of the world. The limitations of some of these intuitions can be transcended by recourse to others that seem more fundamental. In the end, however, we must work with intuitions that strike us as non-negotiable. To ask whether the moral landscape captures our sense of moral imperative is like asking whether the physical universe is logical. Does the physical universe capture our sense of logical imperative? The universe is whatever it is. To ask whether it is logical is simply to wonder whether we can understand it. Perhaps knowing all the laws of physics would leave us feeling that certain laws are contradictory. This wouldn’t be a problem with the universe; it would be a problem with human reasoning. Are there peaks of well-being that might strike us as morally objectionable? This wouldn’t be a problem with the moral landscape; it would be a problem with our moral cognition.

As I argue in The Moral Landscape, we may think merely about what is—specifically about the possibilities of experience in this universe—and realize that this set of facts captures all that can be valued, along with every form of consciousness that could possibly value it. Either a change in the universe can affect the experience of someone, somewhere, or it can’t. I claim that only those changes that can have such effects can be coherently cared about. And if there is a credible exception to this claim, I have yet to encounter it. There is only what IS (which includes all that is possible). If you can’t find your moral imperatives here, I can’t see any other place to look for them.

Monday, April 29, 2024

An expanded view of human minds and their reality.

I want to pass on this recent essay by Venkatesh Rao in its entirety, because it has changed my mind about agreeing with Daniel Dennett that the “Hard Problem” of consciousness is a fabrication that doesn’t actually exist. There are so many interesting ideas in this essay that I will be returning to it frequently in the future.

We Are All Dennettians Now

An homage riff on AI+mind+evolution in honor of Daniel Dennett

The philosopher Daniel Dennett (1942-2024) died last week. Dennett’s contributions to the 1981 book he co-edited with Douglas Hofstadter, The Mind’s I,¹ which I read in 1996 (rather appropriately while doing an undergrad internship at the Center for AI and Robotics in Bangalore), helped shape a lot of my early philosophical development. A few years later (around 1999 I think), I closely read his trollishly titled 1991 magnum opus, Consciousness Explained (alongside Steven Pinker’s similar volume How the Mind Works), and that ended up shaping a lot of my development as an engineer. Consciousness Explained is effectively a detailed neuro-realistic speculative engineering model of the architecture of the brain in a pseudo-code like idiom. I stopped following his work closely at that point, since my tastes took me in other directions, but I did take care to keep him on my radar loosely.

So in his honor, I’d like to (rather chaotically) riff on the interplay of the three big topics that form the through-lines of his life and work: AI, the philosophy of mind, and Darwinism. Long before we all turned into philosophers of AI overnight with the launch of ChatGPT, he defined what that even means.

When I say Dennett’s views shaped mine, I don’t mean I necessarily agreed with them. Arguably, your early philosophical development is not shaped by discovering thinkers you agree with. That’s for later-life refinements (Hannah Arendt, whom I first read only a few years ago, is probably the most influential agree-with philosopher for me). Your early development is shaped by discovering philosophers you disagree with.

But any old disagreement will not shape your thinking. I read Ayn Rand too (if you want to generously call her a philosopher) around the same time I discovered Dennett, and while I disagreed with her too, she basically had no effect on my thinking. I found her work to be too puerile to argue with. But Dennett — disagreeing with him forced me to grow, because it took serious work over years to decades — some of it still ongoing — to figure out how and why I disagreed. It was philosophical weight training. The work of disagreeing with Dennett led me to other contemporary philosophers of mind like David Chalmers and Ned Block, and various other more esoteric bunnytrails. This was all a quarter century ago, but by the time I exited what I think of as the path-dependent phase of my philosophical development circa 2003, my thinking bore indelible imprints of Dennett’s influence.

I think Dennett was right about nearly all the details of everything he touched, and also right (and more crucially, tasteful) in his choices of details to focus on as being illuminating and significant. This is why he was able to provide elegant philosophical accounts of various kinds of phenomenology that elevated the corresponding discourses in AI, psychology, neuroscience, and biology. His work made him a sort of patron philosopher of a variety of youngish scientific disciplines that lacked robust philosophical traditions of their own. It also made him a vastly more relevant philosopher than most of his peers in the philosophy world, who tend, through some mix of insecurity, lack of courage, and illiteracy, to stay away from the dirty details of technological modernity in their philosophizing (and therefore cut rather sorry figures when they attempt to weigh in on philosophy-of-technology issues with cartoon thought experiments about trolleys or drowning children). Even the few who came close, like John Searle, could rarely match Dennett’s mastery of vast oceans of modern techno-phenomenological detail, even if they tended to do better with clever thought experiments. As far as I am aware, Dennett has no clever but misleading Chinese Rooms or Trolley Problems to his credit, which to my mind makes him a superior rather than inferior philosopher.

I suspect he paid a cost for his wide-ranging, ecumenical curiosities in his home discipline. Academic philosophers like to speak in a precise code about the simplest possible things, to say what they believe to be the most robust things they can. Dennett on the other hand talked in common language about the most complex things the human mind has ever attempted to grasp. The fact that he got his hands (and mind!) dirty with vast amounts of technical detail, and dealt in facts with short half-lives from fast-evolving fields, and wrote in a style accessible to any intelligent reader willing to pay attention, made him barely recognizable as a philosopher at all. But despite the cosmetic similarities, it would be a serious mistake to class him with science popularizers or TED/television scientists with a flair for spectacle at the expense of substance.

Though he had a habit of being uncannily right about a lot of the details, I believe Dennett was almost certainly wrong about a few critical fundamental things. We’ll get to what and why later, but the big point to acknowledge is that if he was indeed wrong (and to his credit, I am not yet 100% sure he was), he was wrong in ways that forced even his opponents to elevate their games. He was as much a patron philosopher (or troll or bugbear) to his philosophical rivals as to the scientists of the fields he adopted. You could not even be an opponent of Dennett except in Dennettian ways. To disagree with the premises of Strong AI or Dennett’s theory of mind is to disagree in Dennettian ways.

If I were to caricature how I fit in the Dennettian universe, I suspect I’d be closest to what he called a “mysterian” (though I don’t think the term originated with him). Despite mysterian being something of a dismissive slur, it does point squarely at the core of why his opponents disagree with him, and the parts of their philosophies they must work to harden and make rigorous, to withstand the acid forces of the peculiarly Dennetian mode of scrutiny I want to talk about here.

So to adapt the line used by Milton Friedman to describe Keynes: We are all Dennettians now.

Let’s try and unpack what that means.

Mysterianism

As I said, in Dennettian terms, I am a “mysterian.” At a big commitments level, mysterianism is the polar opposite of the position Dennett consistently argued across his work, a version of what we generally call a “Strong AI” position. But at the detailed level, there are no serious disagreements. Mysterians and Strong AI people agree about most of the details of how the mind works. They just put the overall picture together differently because mysterians want to accommodate certain currently mysterious things that Strong AI people typically reject as either meaningless noise or shallow confusions/illusions.

Dennett’s version of Strong AI was both more robustly constructed than the sophomoric versions one typically encounters, and more broadly applied: beyond AI to human brains and seemingly intelligent processes like evolution. Most importantly, it was actually interesting. Reading his accounts of minds and computers, you do not come away with the vague suspicion of a non-neurotypical succumbing to the typical-mind fallacy and describing the inner life of a robot or philosophical zombie as “truth.” From his writing, it sounds like he had a fairly typical inner-life experience, so why did he seem to deny the apparent ineffable essence of it? Why didn’t he try to eff that essence the way Chalmers, for instance, does? Why did he seemingly dismiss it as irrelevant, unreal, or both?

To be a mysterian in Dennettian terms is to take ineffable, vitalist essences seriously. With AIs and minds, it means taking the hard problem of consciousness seriously. With evolution, it means believing that Darwinism is not the whole story. Dennett tended to use the term as a dismissive slur, but many (re)claim it as a term of approbation, and I count myself among them.

To be a rigorous mysterian, as opposed to one of the sillier sorts Dennett liked to stoop to conquer (naive dualists, intelligent-designers, theological literalists, overconfident mystics…), you have to take vitalist essences “seriously but not literally.” My version of doing that is to treat my vitalist inclinations as placeholder pointers to things that lurk in the dank, ungrokked margins of the thinkable, just beyond the reach of my conceptualizing mind. Things I suspect exist by the vague shapes of the barely sensed holes they leave in my ideas. In pursuit of such things, I happily traffic in literary probing of Labatutian/Lovecraftian/Ballardian varieties, self-consciously magical thinking, junk from various pre-paradigmatic alchemical thought spaces, constructs that uncannily resemble astrology, and so on. I suppose it’s a sort of intuitive-ironic cognitive kayfabe for the most part, but it’s not entirely so.

So for example, when I talk of elan vital, as I frequently do in this newsletter, I don’t mean to imply I believe in some sort of magical fluid flowing through living things or a Gaian planetary consciousness. Nor do I mean the sort of overwrought continental metaphysics of time and subjectivity associated with Henri Bergson (which made him the darling of modernist literary types and an object of contempt to Einstein). I simply mean I suspect there are invisible things going on in the experience and phenomenology of life that are currently beyond my ability to see, model, and talk about using recognizably rational concepts, and I’d rather talk about them as best I can with irrational concepts than pretend they don’t exist.

Or to take another example, when I say that “Darwin is not the whole story,” I don’t mean I believe in intelligent design or a creator god (I’m at least as strong an atheist as Dennett was). I mean that Darwinian principles of evolution constrain but do not determine the nature of nature, and we don’t yet fully grok what completes the picture except perhaps in hand-wavy magical-thinking ways. To fully determine what happens, you need to add more elements. For example, you can add ideas like those of Stuart Kauffman and other complexity theorists. You could add elements of what Maturana and Varela called autopoiesis. Or it might be none of these candidate hole-filling ideas, but something to be dreamt up years in the future. Or never. But just because there are only unsatisfactory candidate ways for talking about stuff doesn’t mean you should conclude the stuff doesn’t exist.

In all such cases, there are more things present in phenomenology I can access than I can talk about using terms of reference that would be considered legitimate by everybody. This is neither known-unknowns (which are holes with shapes defined by concepts that seem rational), nor unknown-unknowns (which have not yet appeared in your senses, and therefore, to apply a Gilbert Ryle principle, cannot be in your mind).

These are things that we might call magically known. Like chemistry was magically known through alchemy. For phenomenology to be worth magically knowing, the way-of-knowing must offer interesting agency, even if it doesn’t hang together conceptually.

Dennett seemed to mostly fiercely resist and reject such impulses. He genuinely seemed to think that belief in (say) the hard problem of consciousness was some sort of semantic confusion. Unlike say B. F. Skinner, whom critics accused of only pretending to not believe in inner processes, Dennett seemed to actually disbelieve in them.

Dennett seemed to disregard a cousin to the principle that absence of evidence is not evidence of absence: Presence of magical conceptualizations does not mean absence of phenomenology. A bad pointer does not disprove the existence of what it points to. This sort of error is easy to avoid in most cases. Lightning is obviously real even if some people seem to account for it in terms of Indra wielding his vajra. But when we try to talk of things that are on the phenomenological margins, barely within the grasp of sensory awareness, or worse, potentially exist as incommunicable but universal subjective phenomenology (such as the experience of the color “blue”), things get tricky.

Dennett was a successor of sorts to philosophers like Gilbert Ryle, and psychologists like B. F. Skinner. In evolutionary philosophy, his thinking aligned with people like Richard Dawkins and Steven Pinker, and against Noam Chomsky (often classified as a mysterian, though I think the unreasonable effectiveness of LLMs kinda vindicates to a degree Chomsky’s notions of an ineffable more-than-Darwin essence around universal grammars that we don’t yet understand).

I personally find it interesting to poke at why Dennett took the positions he took, given that he was contemplating the same phenomenological data and low-to-mid-level conceptual categories as the rest of us (indeed, he supplied much of it for the rest of us). One way to get at it is to ask: Was Dennett a phenomenologist? Are the limits of his ideas the limits of phenomenology?

I think the answers are yes and yes, but he wasn’t a traditional sort of phenomenologist, and he didn’t encounter the more familiar sorts of limits.

The Limits of Phenomenology

Let’s talk regular phenomenology first, before tackling what I think was Dennett’s version.

I think of phenomenology, as a working philosophical method, to be something like a conceited form of empiricism that aims to get away from any kind of conceptually mediated seeing.

When you begin to inquire into a complex question with any sort of fundamentally empirical approach, your philosophy can only be as good as a) the things you know now through your (potentially technologically augmented) senses and b) the ways in which you know those things.

The conceit of phenomenology begins with trying to “unknow” what is known to be known, and contemplate the resulting presumed “pure” experiences “directly.” There are various flavors of this: Husserlian bracketing in the Western tradition, Zen-like “beginner mind” practices, Vipassana style recursive examination of mental experiences, and so on. Some flavors apply only to sense-observations of external phenomena, others apply only to subjective introspection, and some apply to both. Given the current somewhat faddish uptick in Eastern-flavored disciplines of interiority, it is important to take note of the fact that the phenomenological attitude is not necessarily inward-oriented. For example, the 19th century quest to measure a tenth of a second, and factor out the “personal equation” in astronomical observations, was a massive project in Western phenomenology. The abstract thought experiments with notional clocks in the theory of relativity began with the phenomenology of real clocks.

In “doing” phenomenology, you are assuming that you know what you know relatively completely (or can come to know it), and have a reliable procedure for either unknowing it, or systematically alloying it with skeptical doubt, to destabilize unreliable perceptions it might be contributing to. Such destabilizability of your default, familiar way of knowing, in pursuit of a more-perfect unknowing, is in many ways the essence of rationality and objectivity. It is the (usually undeclared) starting posture for doing “science,” among other things.

Crucially, for our purposes in this essay, you do not make a careful distinction between things you know in a rational way and things you know in a magical or mysterian way, but effectively assume that only the former matter; that the latter can be trivially brushed aside as noise signifying nothing that needs unknowing. I think the reverse is true. It is harder, to the point of near impossible, to root out magical ideas from your perceptions, and they signify the most important things you know. More to the point, it is not clear that trying to unknow things, especially magical things, is in fact a good idea, or that unknowing is clarifying rather than blinding. But phenomenology is committed to trying. This has consequences for “phenomenological projects” of any sort, be they Husserlian or Theravadan in spirit.

A relatively crude example: “life” becomes much less ineffable (and depending on your standards, possibly entirely drained of mystery) once you view it through the lens of DNA. Not only do you see new things through new tools, you see phenomenology you could already see, such as Mendelian inheritance, in a fundamentally different way that feels phenomenologically “deeper” when in fact it relies on more conceptual scaffolding, more things that are invisible to most of us, and requires instruments with elaborate theories attached to even render intelligible. You do not see “ATCG” sequences when contemplating a pea flower. You could retreat up the toolchain and turn your attention to how instruments construct the “idea of DNA” but to me that feels like a usually futile yak shave. The better thing to do is ask why a more indirect way of knowing somehow seems to perceive more clearly than more direct ways.

It is obviously hard to “unsee” knowledge of DNA today when contemplating the nature of life. But it would have been even harder to recognize that something “DNA shaped” was missing in say 1850, regardless of your phenomenological skills, by unknowing things you knew then. In fact, clearing away magical ways of knowing might have swept away critical clues.

To become aware, as Mendel did, that there was a hidden order to inheritance in pea flowers, takes a leap of imagination that cannot be purely phenomenological. To suspect in 1943, as Schrodinger did, the existence of “aperiodic covalent bonded crystals” at the root of life, and point the way to DNA, takes a blend of seeing and knowing that is greater than either. Magical knowing is pathfinder-knowing that connects what we know and can see to what we could know and see. It is the bootstrapping process of the mind.

Mendel and Schrodinger “saw” DNA before it was discovered, in terms of reference that would have been considered “rational” in their own time, but this has not always been the case. Newton, famously, had a lot of magical thinking going on in his successful quest for a theory of gravity. Kepler was a numerologist. Leibniz was a ball of mad ideas. One of Newton’s successful bits of thinking, the idea of “particles” of light, which faced off against Huygens’ “waves,” has still not exited the magical realm. The jury is still out in our time about whether quantized fields are phenomenologically “real” or merely a convenient mnemonic-metaphoric motif for some unexpected structure in some unreasonably effective math.

Arguably, none of these thinkers was a phenomenologist, though all had a disciplined empirical streak in their thinking. The history of their ideas suggests that phenomenology is no panacea for philosophical troubles with unruly conceptual universes that refuse to be meekly and rationally “bracketed” away. There is no systematic and magic-free way to march from current truths to better ones via phenomenological disciplines.

The fatal conceit of naive phenomenology (which Paul Feyerabend spotted) is the idea that there is a privileged, reliable (or meta-reliable) “technique” of relating to your sense experiences, independent of the concepts you hold, whether that “technique” is Husserlian bracketing or vipassana. Understood this way, theories of reality are not that different from physical instruments that extend our senses. Experiment and theory don’t always expose each other’s weaknesses. Sometimes they mutually reinforce them.

In fact, I would go so far as to suggest—and I suspect Dennett would have agreed—that there is no such thing as phenomenology per se. All we ever see is the most invisible of our theories (rational and magical), projected via our senses and instruments (which shape, and are shaped by, those theories), onto the seemingly underdetermined aspects of the universe. There are only incomplete ways of knowing and seeing within which ideas and experiences are inextricably intertwined. No phenomenological method can consistently outperform methodological anarchy.

To deny this is to be a traditional phenomenologist, striving to procedurally separate the realm of ideas and concepts from the realm of putatively unfactored and “directly perceived” (a favorite phrase of meditators) “real” experiences.

Husserlian bracketing — “suspending trust in the objectivity of the world” — is fine in theory, but not so easy in practice. How do you know that you’re setting aside preconceived notions, judgments, and biases and attending to a phenomenon as it truly is? How do you set aside the unconscious “theory” that the Sun revolves around the Earth, and open your mind to the possibility that it’s the other way around? How do you “see” DNA-shaped holes in current ways of seeing, especially if they currently manifest as strange demons that you might sweep away in a spasm of over-eager epistemic hygiene? How do you relate, as a phenomenologist, to intrinsically conceptual things like electrons and positrons that only exist behind layers of mathematics describing experimental data processed through layers of instrumentation conceived by existing theories? If you can’t check the math yourself, how can you trust that the light bulb turning on is powered by those “electrons” tracing arcs through cloud chambers?

In practice, we know how such shifts actually came about. Not because philosophers meditated dispassionately on the “phenomenology” with free minds seeing reality as it “truly is,” but because astronomers and biologists with heads full of weird magical notions looked through telescopes and microscopes, maintained careful notes of detailed measurements, informed by those weird magical theories, and tried to account for discrepancies. Tycho Brahe, for instance, who provided the data that dethroned Ptolemy, believed in some sort of Frankenstein helio-geo-centric Ptolemy++ theory. Instead of explaining the discrepancies, as Kepler did later, Brahe attempted to explain them away using terms of reference he was attached to. He failed to resolve the tension. But he paved the way for Kepler to resolve that particular tension (Kepler, of course, introduced new tensions of his own, lost as he was in magical thinking about Platonic solids). Formally phenomenological postures were not just absent from the story, but would arguably have retarded it by being too methodologically conservative.

Phenomenology, in other words, is something of a procedural conceit. An uncritical trust in self-certifying ways of seeing based entirely on how compelling they seem to the seer. The self-certification follows some sort of seemingly rational procedure (which might be mystical but still rational in the sense of being coherent and disciplined and internally consistent) but ultimately derives its authority from the intuitive certainties and suspicions of the perceiving subject. Phenomenological procedures are a kind of rule-by-law for governing sense experience in a laissez-faire way, rather than the “objective” rule-of-law they are often presented as. Phenomenology is to empiricism as “socialism with Chinese characteristics” is to liberal democracy.

This is not to say phenomenology is hopelessly unreliable or useless. All methodologies have their conceits, which manifest as blindspots. With phenomenology, the blindspot manifests as an insistence on non-magicality. The phenomenologist fiercely rejects the Cartesian theater and the varied ghosts-in-machines that dance there. The meditator insists he is “directly perceiving” reality in a reproducible way, no magic necessary. I do not doubt that these convictions are utterly compelling to those who hold them; as compelling as the incommunicable reality of perceiving “blue” is to everybody. I have no particular argument with such insistence. What I actually have a problem with is the delegitimization of magical thinking in the process, which I suspect to be essential for progress.

My own solution is to simply add magical thinking back into the picture for my own use, without attempting to defend that choice, and accepting the consequences.

For example, I take Myers-Briggs and the Enneagram seriously (but not literally!). I believe in the hard problem of consciousness, and therefore think “upload” and “simulationism” ideas are not-even-wrong. I don’t believe in Gods or AGIs, and therefore don’t see the point of Pascal’s wager type contortions to avoid heaven/hell or future-simulated-torture scenarios. In each case my commitments rely on chains of thought that are at least partly magical thinking, and decidedly non-phenomenological, which has various social consequences in various places. I don’t attempt to justify any of it because I think all schemes of justification, whether they are labeled “science” or something else, rest on traditional phenomenology and its limits.

Does this mean solipsism is the best we can hope for? This is where we get back to Dennett.

Dennett, to his credit, was not, I think, a traditional phenomenologist, and he mostly avoided all the traps I’ve pointed out, including the trap of solipsism. Nor was he what one might call a “phenomenologist of language” like most modern analytical philosophers in the West. He was much too interested in technological modernity (and the limits of thought it has been exposing for a century) to be content with such a shrinking, traditionalist philosophical range.

But he was a phenomenologist in the broader sense of rejecting the possible reality of things that currently lack coherent non-magical modes of apprehension.

So how did he operate if not in traditional phenomenological ways?

Demiurge Phenomenology

I believe Dennett was what we might call a demiurge phenomenologist, which is a sort of late modernist version of traditional phenomenology. It will take a bit of work to explain what I mean by that.

I can’t recall if he ever said something like this (I’m definitely not a completist with his work and have only read a fraction of his voluminous output), but I suspect Dennett believed that the human experience of “mind” is itself subject to evolutionary processes (think Jaynes and bicameral mind theories for example — I seem to recall him saying something approving about that in an interview somewhere). He sought to construct philosophy in ways that did not derive authority from an absolute notion of the experience of mind. He tried to do relativity theory for minds, but without descending into solipsism.

It is easiest to appreciate this point by starting with body experience. For example, we are evolved from creatures with tails, but we do not currently possess tails. We possess vestigial “tail bones” and presumably bits of DNA relevant to tails, but we cannot know what it is like to have a tail (or in the spirit of mysterian philosopher Thomas Nagel’s What is it Like to Be a Bat provocation, which I first read in The Mind’s I, what it is like for a tailed creature to have a tail).

We do catch tantalizing Lovecraftian-Ballardian glimpses of our genetic heritage though. For example, the gasping reflex and shot of alertness that accompanies being dunked in water (the mammalian dive reflex) is a remnant of a more aquatic evolutionary past that far predates our primate mode of existence. Now apply that to the experience of “mind.”

Why does Jaynes’ bicameral mind theory sound so fundamentally crackpot to modern minds? It could be that the notion is actually crackpot, but you cannot easily dismiss the idea that it’s actually a well-posed notion that only appears crackpot because we are not currently possessed of bicameral mind-experiences (modulo cognitive marginalia like tulpas and internal family systems — one of my attention/taste biases is to index strongly on typical rather than rare mental experiences; I believe the significance of the latter is highly overstated due to the personal significance they acquire in individual lives).

I hope it is obvious why the possibility that the experience of mind is subject to evolution is fatal to traditional phenomenology. If despite all the sophistication of your cognitive toolchain (bracketing, jhanas, ketamine, whatever), it turns out that you’re only exploring the limits of the evolutionarily transient and arbitrary “variety of mind” that we happen to experience, what does that say about the reliability of the resulting supposedly objective or “direct” perceptions of reality itself that such a mind mediates?

This, by the way, is a problem that evolutionary terms of reference make elegantly obvious, but you can get here in other ways. Darwinian evolution is a convenient scaffold for getting there (and the one I think Dennett used), but ultimately a dispensable one. However you get there, the possibility that experiences of mind are relative to contingent and arbitrary evolutionary circumstances is fatal to the conceits of traditional phenomenology. It reduces traditional phenomenology in status to any old sort of Cartesian or Platonic philosophizing with made-up bullshit schemas. You might as well make 2x2s all day like I sometimes do.

The Eastern response to this quandary has traditionally been rather defeatist — abandoning the project of trying to know reality entirely. Buddhist and Advaita philosophies, in particular, tend to dispense with “objective reality” as an ontologically meaningful characterization of anything. There is only nothing. Or only the perceiving subject. Everything else is maya-moh, a sentimental attachment to the ephemeral unreal. Snap out of it.


I suspect Western philosophy was starting to head that way in the 17th century (through the Spinoza-vs-Leibniz shadowboxing years), but was luckily steered down a less defeatist path to a somewhat uneasy detente between a sort of “probationary reality” accessed through technologically augmented senses, and a subjectivity resolutely bound to that probationary reality via the conceits of traditional phenomenology. This is a long-winded way of saying “science happened” to Western philosophy.

I think that detente is breaking down. One sign is the growing popularity of the relatively pedestrian metaphysics of cognitive scientists like Donald Hoffman (leading to a certain amount of unseemly glee among partisans of Eastern philosophies — “omigod you think quantum mechanics shows reality is an illusion? Welcome to advaita lol”).

But despite these marginally interesting conversations, and whether you get there via Husserl, Hoffman, or vipassana, we’re no closer to resolving what we might call the fundamental paradox of phenomenology. If our experience of mind is contingent, how can any notion of justifiable absolute knowledge be sustained? We are essentially stopped clocks trying to tell the time.

Dennett, I think, favored one sort of answer: That the experience of mind was too untrustworthy and transient to build on, but that mind’s experience of mathematics was both trustworthy and absolute. Bicameral or monocameral, dolphin-brain or primate-brain, AI-brain or Hoffman-optimal ontological apparatus, one thing that is certain is that a prime number is a prime number in all ways that reality (probationary or not, illusory or not) collides with minds (typical or atypical, bursting with exotic qualia or full of trash qualia). Even the 17 and 13 year cicadas agree. Prime numbers constitute a fixed point in all the ways mind-like things have experience-like things in relation to reality-like things, regardless of whether minds, experiences, and reality are real. Prime numbers are like a motif that shows up in multiple unreliable dreams. If you’re going to build up a philosophy of being, you should only use things like prime numbers.
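The cicada aside can be made concrete with a short computation. The sketch below is my own illustration of the standard prime-cycle hypothesis about periodical cicadas (not anything Dennett wrote), and the predator cycle lengths are assumed for illustration: emergences coincide with a predator's population cycle every lcm(cicada, predator) years, so a prime cycle length keeps those coincidences rare.

```python
from math import lcm

# Toy illustration (mine, not Dennett's) of the standard hypothesis about
# periodical cicadas: a prime-length emergence cycle coincides with shorter
# predator population cycles as infrequently as possible.
def years_between_coincidences(cicada_cycle, predator_cycle):
    # Emergences line up every lcm(cicada, predator) years.
    return lcm(cicada_cycle, predator_cycle)

predator_cycles = [2, 3, 4, 5, 6]  # assumed short predator cycles, in years

for cycle in (12, 13, 16, 17):
    worst_case = min(years_between_coincidences(cycle, p) for p in predator_cycles)
    print(cycle, worst_case)
```

With these assumed predator cycles, the prime cycles 13 and 17 give minimum coincidence gaps of 26 and 34 years, while the composite cycles 12 and 16 collide with some predator cycle every 12 and 16 years respectively: the prime is a structural fact that holds regardless of what kind of mind (or cicada) encounters it.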

This is not just the most charitable interpretation of Dennett’s philosophy, but the most interesting and powerful one. It’s not that he thought of the mysterian weakness for ineffable experiences as being particularly “illusory”. As far as he was concerned, you could dismiss the “experience of mind” in its entirety as irrelevant philosophically. Even the idea that it has an epiphenomenal reality need not be seriously entertained because the thing that wants to entertain that idea is not to be trusted.

You see signs of this approach in a lot of his writing. In his collaborative enquiries with Hofstadter, in his fundamentally algorithmic-mathematical account of evolution, in his seemingly perverse stances in debates both with reputable philosophers of mind and disreputable intelligent designers. As far as he was concerned, anyone who chose to build any theory of anything on the basis of anything other than mathematical constancy was trusting the experience of mind to an unjustifiable degree.

Again, I don’t know if he ever said as much explicitly (he probably did), but I suspect he had a basic metaphysics similar to that of another simpatico thinker on such matters, Roger Penrose: as a triad of physical/mental/platonic-mathematical worlds projecting on to each other in a strange loop. But unlike Penrose, who took the three realms to be equally real (or unreal) and entangled in an eternal dance of paradox, he chose to build almost entirely on the Platonic-mathematical vertex, with guarded phenomenological forays to the physical world, and strict avoidance of the mental world as a matter of epistemological hygiene.


The guarded phenomenological forays, unlike those of traditional phenomenologists, were governed by an allow list rather than a block list. Instead of trying to “block out” suspect conceptual commitments with bracketing or meditative discipline, he made sure to only work with allowable concepts and percepts that seemed to have some sort of mathematical bones to them. So Turing machines, algorithms, information theory, and the like, all made it into his thinking in load-bearing ways. Everything else was at best narrative flavor or useful communication metaphors. People who took anything else seriously were guilty of deep procedural illusions rather than shallow intellectual confusions.

If you think about it, his accounts of AI, evolution, and the human mind make a lot more sense if you see them as outcomes of philosophical construction processes governed by one very simple rule: Only use a building block if it looks mathematically real.

Regardless of what you believe about the reality of things other than mathematically underwritten ones, this is an intellectually powerful move. It is a kind of computational constructionism applied to philosophical inquiry, similar to what Wolfram does with physics on automata or hypergraphs, or what Grothendieck did with mathematics.

It is also far harder to do, because philosophy aims and claims to speak more broadly and deeply than either physics or mathematics.

I think Dennett landed where he did, philosophically, because he was essentially trying to rebuild the universe out of a very narrow admissible subset of the phenomenological experience of it. Mysterian musings didn’t make it in because they could not ride allowable percepts and concepts into the set of allowable construction materials.

In other words, he practiced demiurge phenomenology. Natural philosophy as an elaborate construction practice based on self-given rules of construction.

In adopting such an approach he was ahead of his time. We’re on the cusp of being able to literally do what he tried to do with words — build phenomenologically immersive virtual realities out of computational matter that seem to be defined by nothing more than mathematical absolutes, and have almost no connection even to physical reality, thanks to the seeming buffering universality of Turing-equivalent computation.

In that almost, I think, lies the key to my fundamental disagreement with Dennett, and my willingness to wander in magical realms of thought where mathematically sure-footed angels fear to tread. There are… phenomenological gaps between mathematical reconstructions of reality by energetic demiurges (whether they work with powerful arguments or VR headsets) and reality itself.

The biggest one, in my opinion, is the experience of time, which seems to oddly resist computational mathematization (though Stephen Wolfram claims to have one… but then he claims to have a lot of things). In an indirect way, disagreeing with Dennett at age 20 led me to my lifelong fascination with the philosophy of time.

Where to Next?

It is something of a cliche that over the last century or two, philosophy has gradually and reluctantly retreated from an increasing number of the domains it once claimed as its own, as scientific and technological advances rendered ungrounded philosophical ideas somewhere between moot and ridiculous. Bergson retreating in the face of the Einsteinian assault, ceding the question of the nature of time to physics, is probably as good a historical marker of the culmination of the process as any.

I would characterize Dennett as a late modernist philosopher in relation to this cliche. Unlike many philosophers, who simply gave up on trying to provide useful accounts of things that science and technology were beginning to describe in inexorably more powerful ways, he brought enormous energy to the task of simply keeping up. His methods were traditional, but his aim was radical: Instead of trying to provide accounts of things, he tried to provide constructions of things, aiming to arrive at a sense of the real through philosophical construction with admissible materials. He was something like Brouwer in mathematics, trying to do away with suspect building blocks to get to desirable places only using approved methods.

This actually worked very well, as far as it went. For example, I think his philosophy of mind was almost entirely correct as far as the mechanics of cognition go, and the findings of modern AI vindicate a lot of the specifics. His idea of a “multiple drafts” model of cognition (where one part of the brain generates a lot of behavioral options in a bottom-up, anarchic way, and another part chooses a behavior from among them) is broadly correct, not just as a description of how the brain works, but of how things like LLMs work. But unlike many other so-called philosophers of AI he disagreed with, like Nick Bostrom, Dennett’s views managed to be provocative without being simplistic, opinionated without being dogmatic. He appeared to have a Strong AI stance similar to many people I disagree with, but unlike most of those people, I found his views worth understanding with some care, and hard to dismiss as wrong, let alone not-even-wrong.
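The generate-and-select reading of the multiple drafts model can be sketched in a few lines. This is a toy of my own (closer to best-of-n sampling as practiced with LLMs than to Dennett's actual formulation), and the word list and the length-based scorer are both hypothetical stand-ins:

```python
import random

# Toy generate-and-select loop (my sketch, not Dennett's formalism): one
# process proposes many candidate "drafts" bottom-up; a second process
# promotes one of them by a score, analogous to best-of-n sampling.
def propose_drafts(prompt, options, n=5, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return [prompt + " " + rng.choice(options) for _ in range(n)]

def select(drafts, score):
    # The selecting process knows nothing about how the drafts were produced.
    return max(drafts, key=score)

drafts = propose_drafts("I should", ["run", "wait", "speak up", "hide"])
winner = select(drafts, score=len)  # hypothetical scorer: prefer longer drafts
```

The point of the sketch is the division of labor: no single place in the system is "the" author of the behavior, which is the anti-Cartesian-theater intuition the model encodes.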

I like to think he died believing his philosophies — of mind, AI, and Darwinism — to be on the cusp of a triumphant redemption. There are worse ways to go than believing your ideas have been thoroughly vindicated. And indeed, there was a lot Dennett got right. RIP.

Where do we go next with Dennettian questions about AI, minds, and evolution?

Oddly enough, I think Dennett himself pointed the way: demiurge phenomenology is the way. We just need to get more creative with it, and admit magical thinking into the process.

Dennett, I think, approached his questions the way some mathematicians originally approached Euclid’s fifth postulate: Discard it and try to either do without, or derive it from the other postulates. That led him to certain sorts of demiurgical constructions of AI, mind, and evolution.

There is another, equally valid way. Just as other mathematicians replaced the fifth postulate with alternatives and ended up with consistent non-Euclidean geometries, I think we could entertain different mysterian postulates and end up with a consistent non-Dennettian metaphysics of AI, mind, and evolution. You’d proceed by trying to do your own demiurgical constructing of a reality. An alternate reality.

For instance, what happens if you simply assume that there is human “mind stuff” that ends with death and cannot be uploaded or transferred to other matter, and that can never emerge in silico. You don’t have to try accounting for it (no need to mess with speculations about the pineal gland like Descartes, or worry about microtubules and sub-Planck-length phenomena like Penrose). You could just assume that consciousness is a thing like space or time, and run with the idea and see where you land and what sort of consistent metaphysical geometries are possible. This is in fact what certain philosophers of mind like Ned Block do.

The procedure can be extended to other questions as well. For instance, if you think Darwin is not the whole story with evolution, you could simply assume there are additional mathematical selection factors having to do with fractals or prime numbers, and go look for them, as the Santa Fe biologists have done. Start simple and stupid, for example, by applying a rule that “evolution avoids rectangles” or “evolution cannot get to wheels made entirely of grown organic body parts” and see where you land (for the latter, note that the example in the His Dark Materials trilogy cheats — that’s an assembled wheel, not an evolved one).
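The "pick an arbitrary rule and see where you land" procedure is easy to run as a toy. The sketch below is entirely mine (not the Santa Fe work): hill-climb a bitstring toward more 1s, but veto any mutant whose count of 1s is a multiple of five, a stand-in for an "avoids rectangles"-style magical constraint.

```python
import random

# Toy version of "add an arbitrary selection rule and see where you land"
# (my sketch, not the Santa Fe work). Hill-climb a bitstring toward more
# 1s, but veto any mutant whose number of 1s is a multiple of five.
def fitness(genome):
    return sum(genome)

def vetoed(genome):
    return sum(genome) % 5 == 0  # the arbitrary "magical" rule

def evolve(length=20, steps=2000, seed=0):
    rng = random.Random(seed)
    genome = [0] * length
    for _ in range(steps):
        child = genome[:]
        child[rng.randrange(length)] ^= 1  # single point mutation
        if not vetoed(child) and fitness(child) >= fitness(genome):
            genome = child
    return genome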

But all these procedures follow the basic Dennettian approach of demiurgical constructionist phenomenology. Start with your experiences. Let in an allow-list of percepts as concepts. Add an arbitrarily constructed magical suspicion or two. Let your computer build out the entailments of those starter conditions. See what sort of realities you can conjure into being. Maybe one of them will be more real than your current experience of reality. That would be progress. Perhaps progress only you can experience, but still, progress.

Would such near-solipsistic activities constitute a collective philosophical search for truth? I don’t know. But then, I don’t know if we have ever been on a coherent collective philosophical search for truth. All we’ve ever had is more or less satisfying descriptions of the primal mystery of our own individual existence.

Why is there something, rather than nothing, it is like, to be me?

Ultimately, Dennett did not seem to find that question to be either interesting or serious. But he pointed the way for me to start figuring out why I do. And that’s why I too am a Dennettian.


footnote  1
I found the book in my uncle’s library, and the only reason I picked it up was because I recognized Hofstadter’s name because Godel, Escher, Bach had recently been recommended to me. I think it’s one of the happy accidents of my life that I read The Mind’s I before I read Hofstadter’s Godel, Escher, Bach. I think that accident of path-dependence may have made me a truly philosophical engineer as opposed to just an engineer with a side interest in philosophy. Hofstadter is of course much better known and familiar in the engineering world, and reading him is something of a rite of passage in the education of the more sentimental sorts of engineers. But Hofstadter’s ideas were mostly entertaining and informative for me, in the mode of popular science, rather than impactful. Dennett on the other hand, was impactful.