
Monday, April 29, 2024

An expanded view of human minds and their reality.

I want to pass on this recent essay by Venkatesh Rao in its entirety, because it has changed my mind: I no longer agree with Daniel Dennett that the “Hard Problem” of consciousness is a fabrication that doesn’t actually exist. There are so many interesting ideas in this essay that I will be returning to it frequently in the future.

We Are All Dennettians Now

An homage riff on AI+mind+evolution in honor of Daniel Dennett

The philosopher Daniel Dennett (1942-2024) died last week. Dennett’s contributions to the 1981 book he co-edited with Douglas Hofstadter, The Mind’s I,¹ which I read in 1996 (rather appropriately while doing an undergrad internship at the Center for AI and Robotics in Bangalore), helped shape a lot of my early philosophical development. A few years later (around 1999 I think), I closely read his trollishly titled 1991 magnum opus, Consciousness Explained (alongside Steven Pinker’s similar volume How the Mind Works), and that ended up shaping a lot of my development as an engineer. Consciousness Explained is effectively a detailed neuro-realistic speculative engineering model of the architecture of the brain in a pseudo-code like idiom. I stopped following his work closely at that point, since my tastes took me in other directions, but I did take care to keep him on my radar loosely.

So in his honor, I’d like to (rather chaotically) riff on the interplay of the three big topics that form the through-lines of his life and work: AI, the philosophy of mind, and Darwinism. Long before we all turned into philosophers of AI overnight with the launch of ChatGPT, he defined what that even means.

When I say Dennett’s views shaped mine, I don’t mean I necessarily agreed with them. Arguably, your early philosophical development is not shaped by discovering thinkers you agree with. That’s for later-life refinements (Hannah Arendt, whom I first read only a few years ago, is probably the most influential agree-with philosopher for me). Your early development is shaped by discovering philosophers you disagree with.

But any old disagreement will not shape your thinking. I read Ayn Rand too (if you want to generously call her a philosopher) around the same time I discovered Dennett, and while I disagreed with her too, she basically had no effect on my thinking. I found her work to be too puerile to argue with. But Dennett — disagreeing with him forced me to grow, because it took serious work over years to decades — some of it still ongoing — to figure out how and why I disagreed. It was philosophical weight training. The work of disagreeing with Dennett led me to other contemporary philosophers of mind like David Chalmers and Ned Block, and various other more esoteric bunnytrails. This was all a quarter century ago, but by the time I exited what I think of as the path-dependent phase of my philosophical development circa 2003, my thinking bore indelible imprints of Dennett’s influence.

I think Dennett was right about nearly all the details of everything he touched, and also right (and more crucially, tasteful) in his choices of details to focus on as being illuminating and significant. This is why he was able to provide elegant philosophical accounts of various kinds of phenomenology that elevated the corresponding discourses in AI, psychology, neuroscience, and biology. His work made him a sort of patron philosopher of a variety of youngish scientific disciplines that lacked robust philosophical traditions of their own. It also made him a vastly more relevant philosopher than most of his peers in the philosophy world, who tend, through some mix of insecurity, lack of courage, and illiteracy, to stay away from the dirty details of technological modernity in their philosophizing (and therefore cut rather sorry figures when they attempt to weigh in on philosophy-of-technology issues with cartoon thought experiments about trolleys or drowning children). Even the few who came close, like John Searle, could rarely match Dennett’s mastery of vast oceans of modern techno-phenomenological detail, even if they tended to do better with clever thought experiments. As far as I am aware, Dennett has no clever but misleading Chinese Rooms or Trolley Problems to his credit, which to my mind makes him a superior rather than inferior philosopher.

I suspect he paid a cost for his wide-ranging, ecumenical curiosities in his home discipline. Academic philosophers like to speak in a precise code about the simplest possible things, to say what they believe to be the most robust things they can. Dennett on the other hand talked in common language about the most complex things the human mind has ever attempted to grasp. The fact that he got his hands (and mind!) dirty with vast amounts of technical detail, and dealt in facts with short half-lives from fast-evolving fields, and wrote in a style accessible to any intelligent reader willing to pay attention, made him barely recognizable as a philosopher at all. But despite the cosmetic similarities, it would be a serious mistake to class him with science popularizers or TED/television scientists with a flair for spectacle at the expense of substance.

Though he had a habit of being uncannily right about a lot of the details, I believe Dennett was almost certainly wrong about a few critical fundamental things. We’ll get to what and why later, but the big point to acknowledge is that if he was indeed wrong (and to his credit, I am not yet 100% sure he was), he was wrong in ways that forced even his opponents to elevate their games. He was as much a patron philosopher (or troll or bugbear) to his philosophical rivals as to the scientists of the fields he adopted. You could not even be an opponent of Dennett except in Dennettian ways. To disagree with the premises of Strong AI or Dennett’s theory of mind is to disagree in Dennettian ways.

If I were to caricature how I fit in the Dennettian universe, I suspect I’d be closest to what he called a “mysterian” (though I don’t think the term originated with him). Despite mysterian being something of a dismissive slur, it does point squarely at the core of why his opponents disagree with him, and the parts of their philosophies they must work to harden and make rigorous, to withstand the acid forces of the peculiarly Dennettian mode of scrutiny I want to talk about here.

So to adapt the line used by Milton Friedman to describe Keynes: We are all Dennettians now.

Let’s try and unpack what that means.

Mysterianism

As I said, in Dennettian terms, I am a “mysterian.” At a big commitments level, mysterianism is the polar opposite of the position Dennett consistently argued across his work, a version of what we generally call a “Strong AI” position. But at the detailed level, there are no serious disagreements. Mysterians and Strong AI people agree about most of the details of how the mind works. They just put the overall picture together differently because mysterians want to accommodate certain currently mysterious things that Strong AI people typically reject as either meaningless noise or shallow confusions/illusions.

Dennett’s version of Strong AI was both more robustly constructed than the sophomoric versions one typically encounters, and more broadly applied: beyond AI to human brains and seemingly intelligent processes like evolution. Most importantly, it was actually interesting. Reading his accounts of minds and computers, you do not come away with the vague suspicion of a non-neurotypical succumbing to the typical-mind fallacy and describing the inner life of a robot or philosophical zombie as “truth.” From his writing, it sounds like he had a fairly typical inner-life experience, so why did he seem to deny the apparent ineffable essence of it? Why didn’t he try to eff that essence the way Chalmers, for instance, does? Why did he seemingly dismiss it as irrelevant, unreal, or both?

To be a mysterian in Dennettian terms is to take ineffable, vitalist essences seriously. With AIs and minds, it means taking the hard problem of consciousness seriously. With evolution, it means believing that Darwinism is not the whole story. Dennett tended to use the term as a dismissive slur, but many (re)claim it as a term of approbation, and I count myself among them.

To be a rigorous mysterian, as opposed to one of the sillier sorts Dennett liked to stoop to conquer (naive dualists, intelligent-designers, theological literalists, overconfident mystics…), you have to take vitalist essences “seriously but not literally.” My version of doing that is to treat my vitalist inclinations as placeholder pointers to things that lurk in the dank, ungrokked margins of the thinkable, just beyond the reach of my conceptualizing mind. Things I suspect exist by the vague shapes of the barely sensed holes they leave in my ideas. In pursuit of such things, I happily traffic in literary probing of Labatutian/Lovecraftian/Ballardian varieties, self-consciously magical thinking, junk from various pre-paradigmatic alchemical thought spaces, constructs that uncannily resemble astrology, and so on. I suppose it’s a sort of intuitive-ironic cognitive kayfabe for the most part, but it’s not entirely so.

So for example, when I talk of elan vital, as I frequently do in this newsletter, I don’t mean to imply I believe in some sort of magical fluid flowing through living things or a Gaian planetary consciousness. Nor do I mean the sort of overwrought continental metaphysics of time and subjectivity associated with Henri Bergson (which made him the darling of modernist literary types and an object of contempt to Einstein). I simply mean I suspect there are invisible things going on in the experience and phenomenology of life that are currently beyond my ability to see, model, and talk about using recognizably rational concepts, and I’d rather talk about them as best I can with irrational concepts than pretend they don’t exist.

Or to take another example, when I say that “Darwin is not the whole story,” I don’t mean I believe in intelligent design or a creator god (I’m at least as strong an atheist as Dennett was). I mean that Darwinian principles of evolution constrain but do not determine the nature of nature, and we don’t yet fully grok what completes the picture except perhaps in hand-wavy magical-thinking ways. To fully determine what happens, you need to add more elements. For example, you can add ideas like those of Stuart Kauffman and other complexity theorists. You could add elements of what Maturana and Varela called autopoiesis. Or it might be none of these candidate hole-filling ideas, but something to be dreamt up years in the future. Or never. But just because there are only unsatisfactory candidate ways for talking about stuff doesn’t mean you should conclude the stuff doesn’t exist.

In all such cases, there are more things present in phenomenology I can access than I can talk about using terms of reference that would be considered legitimate by everybody. This is neither known-unknowns (which are holes with shapes defined by concepts that seem rational), nor unknown-unknowns (which have not yet appeared in your senses, and therefore, to apply a Gilbert Ryle principle, cannot be in your mind).

These are things that we might call magically known. Like chemistry was magically known through alchemy. For phenomenology to be worth magically knowing, the way-of-knowing must offer interesting agency, even if it doesn’t hang together conceptually.

Dennett seemed to mostly fiercely resist and reject such impulses. He genuinely seemed to think that belief in (say) the hard problem of consciousness was some sort of semantic confusion. Unlike say B. F. Skinner, whom critics accused of only pretending to not believe in inner processes, Dennett seemed to actually disbelieve in them.

Dennett seemed to disregard a cousin to the principle that absence of evidence is not evidence of absence: Presence of magical conceptualizations does not mean absence of phenomenology. A bad pointer does not disprove the existence of what it points to. This sort of error is easy to avoid in most cases. Lightning is obviously real even if some people seem to account for it in terms of Indra wielding his vajra. But when we try to talk of things that are on the phenomenological margins, barely within the grasp of sensory awareness, or worse, potentially exist as incommunicable but universal subjective phenomenology (such as the experience of the color “blue”), things get tricky.

Dennett was a successor of sorts to philosophers like Gilbert Ryle, and psychologists like B. F. Skinner. In evolutionary philosophy, his thinking aligned with people like Richard Dawkins and Steven Pinker, and against Noam Chomsky (often classified as a mysterian, though I think the unreasonable effectiveness of LLMs kinda vindicates to a degree Chomsky’s notions of an ineffable more-than-Darwin essence around universal grammars that we don’t yet understand).

I personally find it interesting to poke at why Dennett took the positions he took, given that he was contemplating the same phenomenological data and low-to-mid-level conceptual categories as the rest of us (indeed, he supplied much of it for the rest of us). One way to get at it is to ask: Was Dennett a phenomenologist? Are the limits of his ideas the limits of phenomenology?

I think the answers are yes and yes, but he wasn’t a traditional sort of phenomenologist, and he didn’t encounter the more familiar sorts of limits.

The Limits of Phenomenology

Let’s talk regular phenomenology first, before tackling what I think was Dennett’s version.

I think of phenomenology, as a working philosophical method, as something like a conceited form of empiricism that aims to get away from any kind of conceptually mediated seeing.

When you begin to inquire into a complex question with any sort of fundamentally empirical approach, your philosophy can only be as good as a) the things you know now through your (potentially technologically augmented) senses and b) the ways in which you know those things.

The conceit of phenomenology begins with trying to “unknow” what is known to be known, and contemplate the resulting presumed “pure” experiences “directly.” There are various flavors of this: Husserlian bracketing in the Western tradition, Zen-like “beginner mind” practices, Vipassana-style recursive examination of mental experiences, and so on. Some flavors apply only to sense-observations of external phenomena, others apply only to subjective introspection, and some apply to both. Given the current somewhat faddish uptick in Eastern-flavored disciplines of interiority, it is important to take note of the fact that the phenomenological attitude is not necessarily inward-oriented. For example, the 19th century quest to measure a tenth of a second, and factor out the “personal equation” in astronomical observations, was a massive project in Western phenomenology. The abstract thought experiments with notional clocks in the theory of relativity began with the phenomenology of real clocks.

In “doing” phenomenology, you are assuming that you know what you know relatively completely (or can come to know it), and have a reliable procedure for either unknowing it, or systematically alloying it with skeptical doubt, to destabilize unreliable perceptions it might be contributing to. Such destabilizability of your default, familiar way of knowing, in pursuit of a more-perfect unknowing, is in many ways the essence of rationality and objectivity. It is the (usually undeclared) starting posture for doing “science,” among other things.

Crucially, for our purposes in this essay, you do not make a careful distinction between things you know in a rational way and things you know in a magical or mysterian way, but effectively assume that only the former matter; that the latter can be trivially brushed aside as noise signifying nothing that needs unknowing. I think the reverse is true. It is harder, to the point of near impossible, to root out magical ideas from your perceptions, and they signify the most important things you know. More to the point, it is not clear that trying to unknow things, especially magical things, is in fact a good idea, or that unknowing is clarifying rather than blinding. But phenomenology is committed to trying. This has consequences for “phenomenological projects” of any sort, be they Husserlian or Theravadan in spirit.

A relatively crude example: “life” becomes much less ineffable (and depending on your standards, possibly entirely drained of mystery) once you view it through the lens of DNA. Not only do you see new things through new tools, you see phenomenology you could already see, such as Mendelian inheritance, in a fundamentally different way that feels phenomenologically “deeper” when in fact it relies on more conceptual scaffolding, more things that are invisible to most of us, and requires instruments with elaborate theories attached to even render intelligible. You do not see “ATCG” sequences when contemplating a pea flower. You could retreat up the toolchain and turn your attention to how instruments construct the “idea of DNA” but to me that feels like a usually futile yak shave. The better thing to do is ask why a more indirect way of knowing somehow seems to perceive more clearly than more direct ways.

It is obviously hard to “unsee” knowledge of DNA today when contemplating the nature of life. But it would have been even harder to recognize that something “DNA shaped” was missing in say 1850, regardless of your phenomenological skills, by unknowing things you knew then. In fact, clearing away magical ways of knowing might have swept away critical clues.

To become aware, as Mendel did, that there was a hidden order to inheritance in pea flowers, takes a leap of imagination that cannot be purely phenomenological. To suspect in 1943, as Schrodinger did, the existence of “aperiodic covalent bonded crystals” at the root of life, and point the way to DNA, takes a blend of seeing and knowing that is greater than either. Magical knowing is pathfinder-knowing that connects what we know and can see to what we could know and see. It is the bootstrapping process of the mind.

Mendel and Schrodinger “saw” DNA before it was discovered, in terms of reference that would have been considered “rational” in their own time, but this has not always been the case. Newton, famously, had a lot of magical thinking going on in his successful quest for a theory of gravity. Kepler was a numerologist. Leibniz was a ball of mad ideas. One of Newton’s successful bits of thinking, the idea of “particles” of light, which faced off against Huygens’ “waves,” has still not exited the magical realm. The jury is still out in our time about whether quantized fields are phenomenologically “real” or merely a convenient mnemonic-metaphoric motif for some unexpected structure in some unreasonably effective math.

Arguably, none of these thinkers was a phenomenologist, though all had a disciplined empirical streak in their thinking. The history of their ideas suggests that phenomenology is no panacea for philosophical troubles with unruly conceptual universes that refuse to be meekly and rationally “bracketed” away. There is no systematic and magic-free way to march from current truths to better ones via phenomenological disciplines.

The fatal conceit of naive phenomenology (which Paul Feyerabend spotted) is the idea that there is a privileged, reliable (or meta-reliable) “technique” of relating to your sense experiences, independent of the concepts you hold, whether that “technique” is Husserlian bracketing or vipassana. Understood this way, theories of reality are not that different from physical instruments that extend our senses. Experiment and theory don’t always expose each other’s weaknesses. Sometimes they mutually reinforce them.

In fact, I would go so far as to suggest—and I suspect Dennett would have agreed—that there is no such thing as phenomenology per se. All we ever see is the most invisible of our theories (rational and magical), projected via our senses and instruments (which shape, and are shaped by, those theories), onto the seemingly underdetermined aspects of the universe. There are only incomplete ways of knowing and seeing within which ideas and experiences are inextricably intertwined. No phenomenological method can consistently outperform methodological anarchy.

To deny this is to be a traditional phenomenologist, striving to procedurally separate the realm of ideas and concepts from the realm of putatively unfactored and “directly perceived” (a favorite phrase of meditators) “real” experiences.

Husserlian bracketing — “suspending trust in the objectivity of the world” — is fine in theory, but not so easy in practice. How do you know that you’re setting aside preconceived notions, judgments, and biases and attending to a phenomenon as it truly is? How do you set aside the unconscious “theory” that the Sun revolves around the Earth, and open your mind to the possibility that it’s the other way around? How do you “see” DNA-shaped holes in current ways of seeing, especially if they currently manifest as strange demons that you might sweep away in a spasm of over-eager epistemic hygiene? How do you relate, as a phenomenologist, to intrinsically conceptual things like electrons and positrons that only exist behind layers of mathematics describing experimental data processed through layers of instrumentation conceived by existing theories? If you can’t check the math yourself, how can you trust that the light bulb turning on is powered by those “electrons” tracing arcs through cloud chambers?

In practice, we know how such shifts actually came about. Not because philosophers meditated dispassionately on the “phenomenology” with free minds seeing reality as it “truly is,” but because astronomers and biologists with heads full of weird magical notions looked through telescopes and microscopes, maintained careful notes of detailed measurements, informed by those weird magical theories, and tried to account for discrepancies. Tycho Brahe, for instance, who provided the data that dethroned Ptolemy, believed in some sort of Frankenstein helio-geo-centric Ptolemy++ theory. Instead of explaining the discrepancies, as Kepler did later, Brahe attempted to explain them away using terms of reference he was attached to. He failed to resolve the tension. But he paved the way for Kepler to resolve that particular tension (Kepler, of course, introduced new ones, while lost in his own magical thinking about platonic solids). Formally phenomenological postures were not just absent from the story, but would have arguably retarded it by being too methodologically conservative.

Phenomenology, in other words, is something of a procedural conceit. An uncritical trust in self-certifying ways of seeing based entirely on how compelling they seem to the seer. The self-certification follows some sort of seemingly rational procedure (which might be mystical but still rational in the sense of being coherent and disciplined and internally consistent) but ultimately derives its authority from the intuitive certainties and suspicions of the perceiving subject. Phenomenological procedures are a kind of rule-by-law for governing sense experience in a laissez-faire way, rather than the “objective” rule-of-law they are often presented as. Phenomenology is to empiricism as “socialism with Chinese characteristics” is to liberal democracy.

This is not to say phenomenology is hopelessly unreliable or useless. All methodologies have their conceits, which manifest as blindspots. With phenomenology, the blindspot manifests as an insistence on non-magicality. The phenomenologist fiercely rejects the Cartesian theater and the varied ghosts-in-machines that dance there. The meditator insists he is “directly perceiving” reality in a reproducible way, no magic necessary. I do not doubt that these convictions are utterly compelling to those who hold them; as compelling as the incommunicable reality of perceiving “blue” is to everybody. I have no particular argument with such insistence. What I actually have a problem with is the delegitimization of magical thinking in the process, which I suspect to be essential for progress.

My own solution is to simply add magical thinking back into the picture for my own use, without attempting to defend that choice, and accepting the consequences.

For example, I take Myers-Briggs and the Enneagram seriously (but not literally!). I believe in the hard problem of consciousness, and therefore think “upload” and “simulationism” ideas are not-even-wrong. I don’t believe in Gods or AGIs, and therefore don’t see the point of Pascal’s wager type contortions to avoid heaven/hell or future-simulated-torture scenarios. In each case my commitments rely on chains of thought that are at least partly magical thinking, and decidedly non-phenomenological, which has various social consequences in various places. I don’t attempt to justify any of it because I think all schemes of justification, whether they are labeled “science” or something else, rest on traditional phenomenology and its limits.

Does this mean solipsism is the best we can hope for? This is where we get back to Dennett.

To his credit, I don’t think Dennett was a traditional phenomenologist, and he mostly avoided all the traps I’ve pointed out, including the trap of solipsism. Nor was he what one might call a “phenomenologist of language” like most modern analytical philosophers in the West. He was much too interested in technological modernity (and the limits of thought it has been exposing for a century) to be content with such a shrinking, traditionalist philosophical range.

But he was a phenomenologist in the broader sense of rejecting the possible reality of things that currently lack coherent non-magical modes of apprehension.

So how did he operate if not in traditional phenomenological ways?

Demiurge Phenomenology

I believe Dennett was what we might call a demiurge phenomenologist, which is a sort of late modernist version of traditional phenomenology. It will take a bit of work to explain what I mean by that.

I can’t recall if he ever said something like this (I’m definitely not a completist with his work and have only read a fraction of his voluminous output), but I suspect Dennett believed that the human experience of “mind” is itself subject to evolutionary processes (think Jaynes and bicameral mind theories for example — I seem to recall him saying something approving about that in an interview somewhere). He sought to construct philosophy in ways that did not derive authority from an absolute notion of the experience of mind. He tried to do relativity theory for minds, but without descending into solipsism.

It is easiest to appreciate this point by starting with body experience. For example, we are evolved from creatures with tails, but we do not currently possess tails. We possess vestigial “tail bones” and presumably bits of DNA relevant to tails, but we cannot know what it is like to have a tail (or in the spirit of mysterian philosopher Thomas Nagel’s “What Is It Like to Be a Bat?” provocation, which I first read in The Mind’s I, what it is like for a tailed creature to have a tail).

We do catch tantalizing Lovecraftian-Ballardian glimpses of our genetic heritage though. For example, the gasping reflex and shot of alertness that accompanies being dunked in water (the mammalian dive reflex) is a remnant of a more aquatic evolutionary past that far predates our primate mode of existence. Now apply that to the experience of “mind.”

Why does Jaynes’ bicameral mind theory sound so fundamentally crackpot to modern minds? It could be that the notion is actually crackpot, but you cannot easily dismiss the idea that it’s actually a well-posed notion that only appears crackpot because we are not currently possessed of bicameral mind-experiences (modulo cognitive marginalia like tulpas and internal family systems — one of my attention/taste biases is to index strongly on typical rather than rare mental experiences; I believe the significance of the latter is highly overstated due to the personal significance they acquire in individual lives).

I hope it is obvious why the possibility that the experience of mind is subject to evolution is fatal to traditional phenomenology. If despite all the sophistication of your cognitive toolchain (bracketing, jhanas, ketamine, whatever), it turns out that you’re only exploring the limits of the evolutionarily transient and arbitrary “variety of mind” that we happen to experience, what does that say about the reliability of the resulting supposedly objective or “direct” perceptions of reality itself that such a mind mediates?

This, by the way, is a problem that evolutionary terms of reference make elegantly obvious, but you can get here in other ways. Darwinian evolution is convenient scaffolding to get there (and the one I think Dennett used), but ultimately dispensable. But however you get there, the possibility that experiences of mind are relative to contingent and arbitrary evolutionary circumstances is fatal to the conceits of traditional phenomenology. It reduces traditional phenomenology in status to any old sort of Cartesian or Platonic philosophizing with made-up bullshit schemas. You might as well make 2x2s all day like I sometimes do.

The Eastern response to this quandary has traditionally been rather defeatist — abandoning the project of trying to know reality entirely. Buddhist and Advaita philosophies in particular, tend to dispense with “objective reality” as an ontologically meaningful characterization of anything. There is only nothing. Or only the perceiving subject. Everything else is maya-moh, a sentimental attachment to the ephemeral unreal. Snap out of it.


I suspect Western philosophy was starting to head that way in the 17th century (through the Spinoza-vs-Leibniz shadowboxing years), but was luckily steered down a less defeatist path to a somewhat uneasy detente between a sort of “probationary reality” accessed through technologically augmented senses, and a subjectivity resolutely bound to that probationary reality via the conceits of traditional phenomenology. This is a long-winded way of saying “science happened” to Western philosophy.

I think that detente is breaking down. One sign is the growing popularity of the relatively pedestrian metaphysics of cognitive scientists like Donald Hoffman (leading to a certain amount of unseemly glee among partisans of Eastern philosophies — “omigod you think quantum mechanics shows reality is an illusion? Welcome to advaita lol”).

But despite these marginally interesting conversations, and whether you get there via Husserl, Hoffman, or vipassana, we’re no closer to resolving what we might call the fundamental paradox of phenomenology. If our experience of mind is contingent, how can any notion of justifiable absolute knowledge be sustained? We are essentially stopped clocks trying to tell the time.

Dennett, I think, favored one sort of answer: that the experience of mind was too untrustworthy and transient to build on, but that mind’s experience of mathematics was both trustworthy and absolute. Bicameral or monocameral, dolphin-brain or primate-brain, AI-brain or Hoffman-optimal ontological apparatus, one thing that is certain is that a prime number is a prime number in all ways that reality (probationary or not, illusory or not) collides with minds (typical or atypical, bursting with exotic qualia or full of trash qualia). Even the 13- and 17-year cicadas agree. Prime numbers constitute a fixed point in all the ways mind-like things have experience-like things in relation to reality-like things, regardless of whether minds, experiences, and reality are real. Prime numbers are like a motif that shows up in multiple unreliable dreams. If you’re going to build up a philosophy of being, you should only use things like prime numbers.
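The cicada aside can be made concrete: primality is a property of the number itself, indifferent to notation, observer, or the kind of mind contemplating it. A minimal sketch (the `is_prime` helper is mine, purely illustrative, not anything from the essay):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test: n is prime iff it has no divisor in [2, sqrt(n)]."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Periodical cicada broods emerge on prime-numbered year cycles (13 and 17),
# which minimizes overlap with shorter predator life cycles.
cycles = [13, 17]
print([is_prime(c) for c in cycles])          # → [True, True]
print([c for c in range(12, 19) if is_prime(c)])  # only 13 and 17 survive the filter
```

The point of the fixed-point metaphor: however a mind represents 13 (base ten, binary, cicada generations), the same test gives the same verdict.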

This is not just the most charitable interpretation of Dennett’s philosophy, but the most interesting and powerful one. It’s not that he thought of the mysterian weakness for ineffable experiences as being particularly “illusory”. As far as he was concerned, you could dismiss the “experience of mind” in its entirety as irrelevant philosophically. Even the idea that it has an epiphenomenal reality need not be seriously entertained because the thing that wants to entertain that idea is not to be trusted.

You see signs of this approach in a lot of his writing: in his collaborative enquiries with Hofstadter, in his fundamentally algorithmic-mathematical account of evolution, in his seemingly perverse stances in debates both with reputable philosophers of mind and disreputable intelligent designers. As far as he was concerned, anyone who chose to build any theory of anything on the basis of anything other than mathematical constancy was trusting the experience of mind to an unjustifiable degree.

Again, I don’t know if he ever said as much explicitly (he probably did), but I suspect he had a basic metaphysics similar to that of another simpatico thinker on such matters, Roger Penrose: as a triad of physical/mental/platonic-mathematical worlds projecting on to each other in a strange loop. But unlike Penrose, who took the three realms to be equally real (or unreal) and entangled in an eternal dance of paradox, he chose to build almost entirely on the Platonic-mathematical vertex, with guarded phenomenological forays to the physical world, and strict avoidance of the mental world as a matter of epistemological hygiene.


The guarded phenomenological forays, unlike those of traditional phenomenologists, were governed by an allow list rather than a block list. Instead of trying to “block out” suspect conceptual commitments with bracketing or meditative discipline, he made sure to only work with allowable concepts and percepts that seemed to have some sort of mathematical bones to them. So Turing machines, algorithms, information theory, and the like, all made it into his thinking in load-bearing ways. Everything else was at best narrative flavor or useful communication metaphors. People who took anything else seriously were guilty of deep procedural illusions rather than shallow intellectual confusions.

If you think about it, his accounts of AI, evolution, and the human mind make a lot more sense if you see them as outcomes of philosophical construction processes governed by one very simple rule: Only use a building block if it looks mathematically real.

Regardless of what you believe about the reality of things other than mathematically underwritten ones, this is an intellectually powerful move. It is a kind of computational constructionism applied to philosophical inquiry, similar to what Wolfram does with physics on automata or hypergraphs, or what Grothendieck did with mathematics.

It is also far harder to do, because philosophy aims and claims to speak more broadly and deeply than either physics or mathematics.

I think Dennett landed where he did, philosophically, because he was essentially trying to rebuild the universe out of a very narrow admissible subset of the phenomenological experience of it. Mysterian musings didn’t make it in because they could not ride allowable percepts and concepts into the set of allowable construction materials.

In other words, he practiced demiurge phenomenology. Natural philosophy as an elaborate construction practice based on self-given rules of construction.

In adopting such an approach he was ahead of his time. We’re on the cusp of being able to literally do what he tried to do with words — build phenomenologically immersive virtual realities out of computational matter that seem to be defined by nothing more than mathematical absolutes, and have almost no connection even to physical reality, thanks to the seeming buffering universality of Turing-equivalent computation.

In that almost, I think, lies the key to my fundamental disagreement with Dennett, and my willingness to wander in magical realms of thought where mathematically sure-footed angels fear to tread. There are… phenomenological gaps between mathematical reconstructions of reality by energetic demiurges (whether they work with powerful arguments or VR headsets) and reality itself.

The biggest one, in my opinion, is the experience of time, which seems to oddly resist computational mathematization (though Stephen Wolfram claims to have one… but then he claims to have a lot of things). In an indirect way, disagreeing with Dennett at age 20 led me to my lifelong fascination with the philosophy of time.

Where to Next?

It is something of a cliche that over the last century or two, philosophy has gradually and reluctantly retreated from an increasing number of the domains it once claimed as its own, as scientific and technological advances rendered ungrounded philosophical ideas somewhere between moot and ridiculous. Bergson retreating in the face of the Einsteinian assault, ceding the question of the nature of time to physics, is probably as good a historical marker of the culmination of the process as any.

I would characterize Dennett as a late modernist philosopher in relation to this cliche. Unlike many philosophers, who simply gave up on trying to provide useful accounts of things that science and technology were beginning to describe in inexorably more powerful ways, he brought enormous energy to the task of simply keeping up. His methods were traditional, but his aim was radical: Instead of trying to provide accounts of things, he tried to provide constructions of things, aiming to arrive at a sense of the real through philosophical construction with admissible materials. He was something like Brouwer in mathematics, trying to do away with suspect building blocks to get to desirable places only using approved methods.

This actually worked very well, as far as it went. I think his philosophy of mind, for example, was almost entirely correct as far as the mechanics of cognition go, and the findings of modern AI vindicate a lot of the specifics. His idea of a “multiple drafts” model of cognition (where one part of the brain generates a lot of behavioral options in a bottom-up, anarchic way, and another part chooses a behavior from among them) is broadly correct, not just as a description of how the brain works, but of how things like LLMs work. But unlike many other so-called philosophers of AI he disagreed with, like Nick Bostrom, Dennett’s views managed to be provocative without being simplistic, opinionated without being dogmatic. He appeared to have a Strong AI stance similar to many people I disagree with, but unlike most of those people, I found his views worth understanding with some care, and hard to dismiss as wrong, let alone not-even-wrong.
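The generate-then-select shape of the “multiple drafts” idea can be sketched in a few lines. This is my own toy illustration, not Dennett's model or an actual LLM decoder; the generate_drafts and score functions are hypothetical stand-ins:

```python
import random

def generate_drafts(prompt, n=8, rng=None):
    """Anarchic bottom-up proposer: emit many candidate drafts in parallel."""
    rng = rng or random.Random(0)
    return [f"{prompt}-draft-{rng.randint(0, 999)}" for _ in range(n)]

def score(draft):
    """Stand-in evaluator; a real system would score coherence, reward, etc."""
    return sum(map(ord, draft)) % 100

def multiple_drafts(prompt):
    """One process generates many drafts; a separate process picks a winner.
    No single draft is 'the' thought until selection makes it so."""
    drafts = generate_drafts(prompt)
    return max(drafts, key=score)

winner = multiple_drafts("plan")
```

The point of the sketch is architectural: generation and selection are separate processes, and the “conscious” output is just whichever draft survived the competition.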

I like to think he died believing his philosophies — of mind, AI, and Darwinism — to be on the cusp of a triumphant redemption. There are worse ways to go than believing your ideas have been thoroughly vindicated. And indeed, there was a lot Dennett got right. RIP.

Where do we go next with Dennettian questions about AI, minds, and evolution?

Oddly enough, I think Dennett himself pointed the way: demiurge phenomenology is the way. We just need to get more creative with it, and admit magical thinking into the process.

Dennett, I think, approached his questions the way some mathematicians originally approached Euclid’s fifth postulate: Discard it and try to either do without, or derive it from the other postulates. That led him to certain sorts of demiurgical constructions of AI, mind, and evolution.

There is another, equally valid way. Just as other mathematicians replaced the fifth postulate with alternatives and ended up with consistent non-Euclidean geometries, I think we could entertain different mysterian postulates and end up with a consistent non-Dennettian metaphysics of AI, mind, and evolution. You’d proceed by trying to do your own demiurgical constructing of a reality. An alternate reality.

For instance, what happens if you simply assume that there is human “mind stuff” that ends with death and cannot be uploaded or transferred to other matter, and that can never emerge in silico? You don’t have to try accounting for it (no need to mess with speculations about the pineal gland like Descartes, or worry about microtubules and sub-Planck-length phenomena like Penrose). You could just assume that consciousness is a thing like space or time, and run with the idea and see where you land and what sort of consistent metaphysical geometries are possible. This is in fact what certain philosophers of mind like Ned Block do.

The procedure can be extended to other questions as well. For instance, if you think Darwin is not the whole story with evolution, you could simply assume there are additional mathematical selection factors having to do with fractals or prime numbers, and go look for them, as the Santa Fe biologists have done. Start simple and stupid, for example, by applying a rule that “evolution avoids rectangles” or “evolution cannot get to wheels made entirely of grown organic body parts” and see where you land (for the latter, note that the example in the His Dark Materials trilogy cheats — that’s an assembled wheel, not an evolved one).
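The “arbitrary extra selection rule” move can itself be sketched computationally. The toy genetic algorithm below is my own illustration, not the Santa Fe work: a standard mutate-and-select loop with one added, arbitrary prohibition (here, “no uniform genomes” stands in for “no rectangles”):

```python
import random

def evolve(fitness, forbidden, pop_size=30, genome_len=8, generations=40, seed=1):
    """Toy hill-climbing GA with an extra, arbitrary selection rule:
    any genome for which `forbidden` is True is never allowed to survive."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = [g for g in pop if not forbidden(g)] or pop
        survivors.sort(key=fitness, reverse=True)
        parents = survivors[: max(2, pop_size // 4)]
        pop = []
        for _ in range(pop_size):
            child = list(rng.choice(parents))
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            pop.append(child)
    return max((g for g in pop if not forbidden(g)), key=fitness, default=pop[0])

# Stand-in prohibition: forbid genomes whose bits are all equal, so the
# fitness optimum (all ones) is reachable only approximately.
is_uniform = lambda g: len(set(g)) == 1
best = evolve(fitness=sum, forbidden=is_uniform)
```

Running this shows the essay's point in miniature: selection pushes hard toward the forbidden optimum but the population settles into the best admissible neighborhood instead, a consistent "non-Euclidean" evolutionary geometry.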

But all these procedures follow the basic Dennettian approach of demiurgical constructionist phenomenology. Start with your experiences. Let in an allow-list of percepts as concepts. Add an arbitrarily constructed magical suspicion or two. Let your computer build out the entailments of those starter conditions. See what sort of realities you can conjure into being. Maybe one of them will be more real than your current experience of reality. That would be progress. Perhaps progress only you can experience, but still, progress.

Would such near-solipsistic activities constitute a collective philosophical search for truth? I don’t know. But then, I don’t know if we have ever been on a coherent collective philosophical search for truth. All we’ve ever had is more or less satisfying descriptions of the primal mystery of our own individual existence.

Why is there something, rather than nothing, it is like, to be me?

Ultimately, Dennett did not seem to find that question to be either interesting or serious. But he pointed the way for me to start figuring out why I do. And that’s why I too am a Dennettian.


footnote  1
I found the book in my uncle’s library, and the only reason I picked it up was that I recognized Hofstadter’s name: Gödel, Escher, Bach had recently been recommended to me. I think it’s one of the happy accidents of my life that I read The Mind’s I before I read Hofstadter’s Gödel, Escher, Bach. I think that accident of path-dependence may have made me a truly philosophical engineer as opposed to just an engineer with a side interest in philosophy. Hofstadter is of course much better known and familiar in the engineering world, and reading him is something of a rite of passage in the education of the more sentimental sorts of engineers. But Hofstadter’s ideas were mostly entertaining and informative for me, in the mode of popular science, rather than impactful. Dennett, on the other hand, was impactful.




Wednesday, January 31, 2024

Thoughts on having a self - Deric's MindBlog as WebLog - January 2024

In the spirit of the original personal WebLogs of the 1990s that morphed into Blogs that enjoyed a golden era in the 2000s, I am offering readers a trial version of a putative series of posts containing selected and edited free-standing clips from my personal ‘mind journal’ - a subset of paragraphs taken from the larger personal journal that I have been maintaining for over 25 years. Most of these paragraphs suggest perspectives on how our minds work; some are on random topics. I hope these perspectives might not seem too alien to readers, and might be found useful by a few. Below are some mind journal paragraphs from January 2024.

1/2/24
The I* signifier (from a recent community.wakingup.com discussion) might be a good minimal token for expressing the space or process from which the present moment’s version of a self or I can appear. During moments of renewal or recharge, when awareness first intuits this process, there can be an intense brief sense of naïveté, openness and joy - excitement at the prospect of novelty, experiencing new things. For original mind no activity is off the table.

1/3/2024
Has my timing been right a second time?

In the early 1990s I decided that the cream had been skimmed with respect to discovering the basic molecular steps that turn photons of light into a nerve signal in our eyes. Some of the steps were revealed by experiments in my laboratory. I decided to switch my attention to studying how our minds work, not in direct laboratory experiments, but through studying, writing about, and lecturing on the work of others.

Moving from the early to mid 2020s, I’m feeling a second ‘the cream has been skimmed’ sentiment with respect to the biology of mind: there is a general understanding and acknowledgement in the scientific and educated lay community that our illusory predictive selves are generated by impersonal neuronal nerve nets. There is no ‘hard problem of consciousness’ - it is an illusion like everything else in our heads - and the main function of counter-theories (explanations at the level of quantum physics, etc.) is to sustain the continued academic employment of those espousing them.

I feel like my 2022 UT Forum lecture, “New Perspectives on how our Minds Work” may have been a last hurrah with respect to studying the Biology of Mind, just as the 1996 Brain and Behavioral Sciences article was a last hurrah in my vision research career.

It is feeling like it's time to let go, to move on…perhaps to art, music, AI, studying the emergence of trans-human forms…..

1/3/24
…if other people choose behaviors that will lead to their demise there is little one can do, even with physical restraint and medication, to compel them to choose otherwise. They are performing a version of their I or self that is self destructive, and that they are unable to escape. 

Some are able to escape to feel redemption in surrender to a higher power, being ‘saved by the Lord’ or a secular equivalent such as non-dual awareness. Both are defined by yielding ultimate agency to something other than the experienced I or self - allowing a return to the sense of security and repose of the newborn infant feeling loving care. As that infant begins to develop an I or ego, it loses awareness of how much of its well-being depends on powers beyond its control and generates an illusory sense of agency.

1/4/24
A general rule is to see people as they are, not as you want them to be. However, if you treat people as you want or expect them to be, not as they are, sometimes they might begin to slowly conform to your expectations.  This would be the basis of the effectiveness of Gandhi’s advice to “Be the change you want to see in the world.”

1/8/24 When attention is at bay, the gremlins will play, letting one’s disposition and temperament be molded more by outside input and less by internal reflection. The ending slide of several of my talks is the simple phrase “pay attention.”  The ability to do that is a good assay of biological aging and a predictor of longevity.

1/9/24
Attention doesn’t have to be ‘at tension.’ Priors that have pre-tensed muscles for the most probable action to be taken can sometimes be let go.

1/10/24
Now that I know that I can go to the engine room and reboot the Deric-OS with relative ease, there is no compelling reason to emphasize remaining there. What is needed is an appropriate balance between the brain’s attentional and default mode (mind wandering or rumination) systems, just as with the sympathetic and parasympathetic autonomic nervous systems. Going to extremes interferes with remaining gently attentive to one’s states of arousal, valence, and agency (A/V/A) and in touch with value, purpose, and meaning (V/P/M).

The point of paying attention is not to be in some sort of constant blissful or calm state, but rather just to be a normal creature. This journal is a useful present centered tool that modulates appropriate function by enhancing recall of the recent past and projected future.

There wants to be a spontaneous dance between intentional and mind wandering modes in the present moment. That way, one might manage to break the pattern of having a morning high energy caffeine fueled attentional focusing, which with the fading of the chemistry turns into an overly aimless mind wandering in the afternoon. Perhaps, if both could be kept in check, one might have a more seamless moving through a day, particularly as one’s energy begins to wane towards its end.  

1/14/24 Trying to describe the 'new platform' (Deric-OS, self center of gravity, I*, where experienced self is coming from) doing frequent resets to zero in the midst of change that toggle the system to problem solving, matching input to appropriate output, like a learning newborn. Short circuiting blips of arousal or fear that are irrelevant, but still able to run from a lion attack. Going with the computer metaphor, ‘processing platform’ as a candidate for a bit of language that describes what is indescribable in words.

1/17/24
Mulling over how little of my self (I*, it) experience is spent outside of my linguistic narrative self thread.
 
1/18/24
Arousal/Valence/Agency (real/real/imagined) are the deep structure of the whole show, and well-being occurs to the extent that the sliders are in the direction of low/high/high.

1/19/24
Construing oneself as a kind and caring caretaker of family and friends yields value, purpose, and meaning (V/P/M), and integrates it with the machine-room viscera. And the kindness and positivity of the caretaker role nudges the valence part of A/V/A towards being more positive. This supports being a caring presence that observes, listens, asks questions. Wishing the best for others, while letting their experiences and issues be their own.

1/19/24 The pre-linguistic animal platform as experienced center of gravity with language bits that rise from the simmering caldron to enable connections with other humans experienced as ephemeral transient wisps or vapors, with the real biological creature being the vastly larger originating presence, the creature's experienced place of rest and residence.

1/20/24
A grandiose fantasy: The Imperial Poobah, secure in its belief in itself as master of the random, the surfer of uncertainty. Ready to face the “There be dragons there” description sometimes written on unexplored areas depicted on ancient maps of the world.

1/21/24 There is so little to provide a sustaining narrative in the current social and geopolitical context that expanding awareness towards its interoceptive, prelinguistic, gestural and prosodic animal state becomes more appealing and sustaining, along with letting awareness focus on potential remedies rather than further detailed descriptions of dysfunctions. Being active in pursuing small sanities.

1/23/24
Mulling over the calm and equanimity offered by impersonality, being 'it', the animal, just resting, watching. Also able to be kind and caring in response to input from others, offering sympathy, empathy, and accepting that one might influence but cannot compel fixes to problems that are not one’s own.

1/27/24
Mulling over how I continue to spew out chunks of ideas, presenting them in the flow of the present moment, from which they then recede to become part of a largely lost and unrecognized archive, still accessible in principle by searches - but I frequently have difficulty finding them. My golden bonbons of the moment, finding their resting place among their previous instances. Think of Andrew Sullivan’s once prominent blog ‘The Daily Dish,’ mostly unknown to the present. It doesn’t matter. One still keeps banging out the material.

Wednesday, November 29, 2023

Meta-Learned Models of Cognition

I pass on the text of a recent email from Behavioral and Brain Sciences inviting commentary on an article by Binz et al.  I am beginning to plow through the interesting text and figures - and will mention that motivated readers can obtain a PDF of the article from me.

Target Article: Meta-Learned Models of Cognition

Authors: Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, Matthew Botvinick, Jane X. Wang, and Eric Schulz

Deadline for Commentary Proposals: Wednesday, December 20, 2023

Abstract: Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates the commitment to a particular prior and a likelihood function which - in combination with Bayes' rule - determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to this day. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.

Keywords: meta-learning, rational analysis, Bayesian inference, cognitive modeling, neural networks
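The abstract's core claim, that meta-learning can recover Bayes-optimal inductive biases from repeated exposure to tasks, can be seen in a minimal Gaussian toy model. This is my own sketch, not from Binz et al.: a learner that estimates the prior over task parameters from many past tasks ends up doing Bayes-optimal shrinkage on a new task.

```python
import random
import statistics

def simulate_task(rng, prior_mean=2.0, prior_sd=1.0, noise_sd=3.0, n=5):
    """One 'task': a hidden parameter drawn from the prior, plus noisy observations."""
    theta = rng.gauss(prior_mean, prior_sd)
    xs = [rng.gauss(theta, noise_sd) for _ in range(n)]
    return theta, xs

def meta_learn(rng, n_tasks=2000):
    """Meta-'experience': across many tasks, estimate the prior mean over task parameters."""
    sample_means = [statistics.mean(simulate_task(rng)[1]) for _ in range(n_tasks)]
    return statistics.mean(sample_means)

def predict(xs, meta_prior_mean, prior_var=1.0, noise_var=9.0):
    """Conjugate-Gaussian (Bayes-optimal) shrinkage toward the meta-learned prior."""
    n = len(xs)
    w = (n / noise_var) / (n / noise_var + 1 / prior_var)
    return w * statistics.mean(xs) + (1 - w) * meta_prior_mean

rng = random.Random(0)
learned_prior = meta_learn(rng)  # should land near the true prior mean, 2.0
```

The inductive bias (the prior) is not hand-designed; it is absorbed from interaction with the task distribution, which is the paper's contrast with hand-built cognitive architectures and hand-specified Bayesian priors.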

Monday, January 23, 2023

Our different styles of thinking.

An interesting recent article by Joshua Rothman, the ideas editor of newyorker.com, notes several recent books that describe different styles of thinking. A few clips:
In “Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions,” Temple Grandin identifies a continuum of thought styles that’s roughly divisible into three sections. On one end are verbal thinkers, who often solve problems by talking about them in their heads or, more generally, by proceeding in the linear, representational fashion typical of language. (Estimating the cost of a building project, a verbal thinker might price out all the components, then sum them using a spreadsheet—an ordered, symbol-based approach.) On the other end of the continuum are “object visualizers”: they come to conclusions through the use of concrete, photograph-like mental images, as Grandin does when she compares building plans in her mind. In between those poles, Grandin writes, is a second group of visual thinkers—“spatial visualizers,” who seem to combine language and image, thinking in terms of visual patterns and abstractions.
Grandin proposes imagining a church steeple. Verbal people, she finds, often make a hash of this task, conjuring something like “two vague lines in an inverted V,” almost as though they’ve never seen a steeple before. Object visualizers, by contrast, describe specific steeples that they’ve observed on actual churches: they “might as well be staring at a photograph or photorealistic drawing” in their minds. Meanwhile, the spatial visualizers picture a kind of perfect but abstract steeple—“a generic New England-style steeple, an image they piece together from churches they’ve seen.” They have noticed patterns among church steeples, and they imagine the pattern, rather than any particular instance of it.
The imagistic minds in “Visual Thinking” can seem glamorous compared with the verbal ones depicted in “Chatter: The Voice in Our Head, Why It Matters, and How to Harness It,” by Ethan Kross. Kross is interested in what’s known as the phonological loop—a neural system, consisting of an “inner ear” and an “inner voice,” that serves as a “clearinghouse for everything related to words that occurs around us in the present.” If Grandin’s visual thinkers are attending Cirque du Soleil, then Kross’s verbal thinkers are stuck at an Off Broadway one-man show. It’s just one long monologue.
People with inner monologues, Kross reports, often spend “a considerable amount of time thinking about themselves, their minds gravitating toward their own experiences, emotions, desires, and needs.” This self-centeredness can spill over into our out-loud conversation. In the nineteen-eighties, the psychologist Bernard Rimé investigated what we’d now call venting—the compulsive sharing of negative thoughts with other people. Rimé found that bad experiences can inspire not only interior rumination but the urge to broadcast it. The more we share our unhappiness with others, the more we alienate them… Maybe it can pay to keep your thoughts to yourself.
Kross’s bottom line is that our inner voices are powerful tools that must be tamed. He ends his book with several dozen techniques for controlling our chatter. He advises trying “distanced self-talk”: by using “your name and the second-person ‘you’ to refer to yourself,” he writes, you can gain more command over your thinking. You might use your inner voice to pretend that you’re advising a friend about his problems; you might redirect your thoughts toward how universal your experiences are (It’s normal to feel this way), or contemplate how every new experience is a challenge you can overcome (I have to learn to trust my partner). The idea is to manage the voice that you use for self-management. Take advantage of the suppleness of dialogue. Don’t just rehearse the same old scripts; send some notes to the writers’ room.
If we can’t say exactly how we think, then how well do we know ourselves? In an essay titled “The Self as a Center of Narrative Gravity,” the philosopher Daniel Dennett argued that a layer of fiction is woven into what it is to be human. In a sense, fiction is flawed: it’s not true. But, when we open a novel, we don’t hurl it to the ground in disgust, declaring that it’s all made-up nonsense; we understand that being made up is actually the point. Fiction, Dennett writes, has a deliberately “indeterminate” status: it’s true, but only on its own terms. The same goes for our minds. We have all sorts of inner experiences, and we live through and describe them in different ways—telling one another about our dreams, recalling our thoughts, and so on. Are our descriptions and experiences true or fictionalized? Does it matter? It’s all part of the story.

Wednesday, November 16, 2022

The neurophysiology of consciousness - neural correlates of qualia

This is a post for consciousness mavens. Tucker, Luu, and Johnson have offered a neurophysiological model of consciousness, Neurophysiological mechanisms of implicit and explicit memory in the process of consciousness. The open-source article has useful summary graphics, and embraces the 'Hard Problem' of consciousness - the nature of 'qualia' (how it feels to see red, eat an apple, etc.). Here I pass on brief, and then more lengthy, paragraphs on what the authors think is new and noteworthy about their ideas.
The process of consciousness, generating the qualia that may appear to be irreducible qualities of experience, can be understood to arise from neurophysiological mechanisms of memory. Implicit memory, organized by the lemnothalamic brain stem projections and dorsal limbic consolidation in REM sleep, supports the unconscious field and the quasi-conscious fringe of current awareness. Explicit memory, organized by the collothalamic midbrain projections and ventral limbic consolidation of NREM sleep, supports the focal objects of consciousness.
Neurophysiological mechanisms are increasingly understood to constitute the foundations of human conscious experience. These include the capacity for ongoing memory, achieved through a hierarchy of reentrant cross-laminar connections across limbic, heteromodal, unimodal, and primary cortices. The neurophysiological mechanisms of consciousness also include the capacity for volitional direction of attention to the ongoing cognitive process, through a reentrant fronto-thalamo-cortical network regulation of the inhibitory thalamic reticular nucleus. More elusive is the way that discrete objects of subjective experience, such as the color of deep blue or the sound of middle C, could be generated by neural mechanisms. Explaining such ineffable qualities of subjective experience is what Chalmers has called “the hard problem of consciousness,” which has divided modern neuroscientists and philosophers alike. We propose that insight into the appearance of the hard problem can be gained through integrating classical phenomenological studies of experience with recent progress in the differential neurophysiology of consolidating explicit versus implicit memory. Although the achievement of consciousness, once it is reflected upon, becomes explicit, the underlying process of generating consciousness, through neurophysiological mechanisms, is largely implicit. Studying the neurophysiological mechanisms of adaptive implicit memory, including brain stem, limbic, and thalamic regulation of neocortical representations, may lead to a more extended phenomenological understanding of both the neurophysiological process and the subjective experience of consciousness.

Friday, February 18, 2022

Illusory faces are more likely to be perceived as male than female

Interesting observations from Wardle et al.:
Despite our fluency in reading human faces, sometimes we mistakenly perceive illusory faces in objects, a phenomenon known as face pareidolia. Although illusory faces share some neural mechanisms with real faces, it is unknown to what degree pareidolia engages higher-level social perception beyond the detection of a face. In a series of large-scale behavioral experiments (total n = 3,815 adults), we found that illusory faces in inanimate objects are readily perceived to have a specific emotional expression, age, and gender. Most strikingly, we observed a strong bias to perceive illusory faces as male rather than female. This male bias could not be explained by preexisting semantic or visual gender associations with the objects, or by visual features in the images. Rather, this robust bias in the perception of gender for illusory faces reveals a cognitive bias arising from a broadly tuned face evaluation system in which minimally viable face percepts are more likely to be perceived as male.

Friday, October 15, 2021

The dark side of Eureka: Artificially induced Aha moments make facts feel true

Fascinating observations from Laukkonen et al.:
Some ideas that we have feel mundane, but others are imbued with a sense of profundity. We propose that Aha! moments make an idea feel more true or valuable in order to aid quick and efficient decision-making, akin to a heuristic. To demonstrate where the heuristic may incur errors, we hypothesized that facts would appear more true if they were artificially accompanied by an Aha! moment elicited using an anagram task. In a preregistered experiment, we found that participants (n = 300) provided higher truth ratings for statements accompanied by solved anagrams even if the facts were false, and the effect was particularly pronounced when participants reported an Aha! experience (d = .629). Recent work suggests that feelings of insight usually accompany correct ideas. However, here we show that feelings of insight can be overgeneralized and bias how true an idea or fact appears, simply if it occurs in the temporal ‘neighbourhood’ of an Aha! moment. We raise the possibility that feelings of insight, epiphanies, and Aha! moments have a dark side, and discuss some circumstances where they may even inspire false beliefs and delusions, with potential clinical importance.
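For readers rusty on the statistic: the reported d = .629 is a Cohen's d, the difference between group means divided by the pooled standard deviation. A minimal sketch with made-up ratings (not the authors' data):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference between two independent groups."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical truth ratings (0-10) with and without an Aha! moment:
aha    = [7, 8, 6, 9, 7, 8]
no_aha = [5, 6, 7, 5, 6, 6]
d = cohens_d(aha, no_aha)
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, so the paper's .629 is a medium-to-large shift in felt truth from a temporally adjacent Aha! moment.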

Wednesday, April 14, 2021

Useful Delusions

I want to pass on to MindBlog readers some background information on the recent book "Useful Delusions: The Power and Paradox of the Self-Deceiving Brain" by Shankar Vedantam, host of NPR’s “The Hidden Brain,” and science writer Bill Mesler. It was compiled by a member of the four person program committee of the Austin Rainbow Forum discussion group to which I belong.
This Hidden Brain podcast interview with Shankar Vedantam is a great resource for those up to the challenge of sitting in a comfortable chair for an hour listening to a great conversation while enjoying a pleasant beverage.
And here are a few alternatives for the listening challenged:
A book excerpt at the Hidden Brain website.
And, book reviews from The New York Journal of Books, and The Wall Street Journal.

Wednesday, August 14, 2019

Five myths about consciousness

In a perspective piece for the Washington Post, Christof Koch (chief scientist and president of the Allen Institute for Brain Science) offers a brief, concise debunking of five common fables about consciousness. I suggest you give it a read. The myths are:
Humans have a unique brain.
Science will never understand consciousness.
Dreams contain hidden clues about our secret desires.
We are susceptible to subliminal messages.
Near-death 'visions' are evidence of life after death.

Wednesday, June 26, 2019

Implicit racial bias is preserved by historical roots of social environments.

Payne et al. note that geographic differences in implicit racial bias correlate with the number of slaves in those areas in 1860.

Significance
Geographic variation in implicit bias is associated with multiple racial disparities in life outcomes. We investigated the historical roots of geographical differences in implicit bias by comparing average levels of implicit bias with the number of slaves in those areas in 1860. Counties and states more dependent on slavery in 1860 displayed higher pro-White implicit bias today among White residents and less pro-White bias among Black residents. Mediation analyses suggest that historical oppression may be transmitted into contemporary biases through structural inequalities, including disparities in poverty and upward mobility. Given the importance of contextual factors, efforts to reduce unintended discrimination might focus on modifying social environments that cue implicit biases in the minds of individuals.
Abstract
Implicit racial bias remains widespread, even among individuals who explicitly reject prejudice. One reason for the persistence of implicit bias may be that it is maintained through structural and historical inequalities that change slowly. We investigated the historical persistence of implicit bias by comparing modern implicit bias with the proportion of the population enslaved in those counties in 1860. Counties and states more dependent on slavery before the Civil War displayed higher levels of pro-White implicit bias today among White residents and less pro-White bias among Black residents. These associations remained significant after controlling for explicit bias. The association between slave populations and implicit bias was partially explained by measures of structural inequalities. Our results support an interpretation of implicit bias as the cognitive residue of past and present structural inequalities.
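The abstract's claim that the slavery-bias association "was partially explained by measures of structural inequalities" rests on mediation analysis: the total effect of the predictor is decomposed into a direct effect and an indirect effect transmitted through a mediator. A minimal sketch, on simulated county-level data (my own illustration, not the study's data or method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical county-level variables, simulated so that historical slavery
# proportion (x) influences present-day implicit bias (y) partly through a
# structural-inequality mediator (m).
n = 500
x = rng.uniform(0, 1, n)                       # proportion enslaved in 1860
m = 0.6 * x + rng.normal(0, 0.2, n)            # structural inequality (mediator)
y = 0.3 * x + 0.5 * m + rng.normal(0, 0.2, n)  # implicit bias today

def slopes(design, target):
    """OLS coefficients for target ~ design columns (intercept included)."""
    X = np.column_stack([np.ones(len(target))] + list(design))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]

c = slopes([x], y)[0]             # total effect of x on y
c_prime, b = slopes([x, m], y)    # direct effect of x, plus mediator effect
indirect = c - c_prime            # portion transmitted through the mediator
print(f"total={c:.2f} direct={c_prime:.2f} indirect={indirect:.2f}")
```

"Partial" mediation is the situation sketched here: the direct effect shrinks but does not vanish once the mediator is controlled for, so some of the association flows through structural inequality and some does not.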

Thursday, January 17, 2019

Upstairs/Downstairs in our Brain - Who (or what) is running our show?


I want to pass on to MindBlog readers the lecture notes and slides from a talk I gave yesterday to "NOVA" - one of five senior learning programs hosted by the Osher Lifelong Learning Institute (OLLI) at the Univ. of Texas. The talk has the title of this post. Here is a final summary slide from the talk:


Upstairs/Downstairs - Who or what is running our show?

I.  What is happening as our “I” acts and senses in the world?

  A. Our subjective “I” is predictions that are late to sensing and acting.
  B. What we experience is our prediction of what is out there, or of what the sensory consequences of our actions will be.
  C. We can place our experienced body inside or outside our actual one.
  D. The “I” or self that we experience is an illusion, a virtual avatar in our brain.

II.  What behaviors are coming from upstairs and downstairs?

  A. Downstairs dominates rapid actions and judgements.
  B. Upstairs modulates this with slower reasoned responses.
  C. Reasons and emotions cause each other.
  D. Different personality types have different upstairs profiles.

III.  What is happening in paying attention versus mind wandering?

  A. Mind wandering is a transient loss of mental autonomy.
  B. Mind wandering and default mode networks stabilize our self model. 
  C. Mind wandering facilitates creative incubation.
  D. A wandering mind can be an unhappy mind.

IV. How might we observe and influence what our brains are doing?

  A. Attention can be trained by meditation-like activities.
  B. Attention training, like training for other skills, causes brain changes.
  C. Attention training can allow more autonomy in choosing actions and emotions, making us more pilot than passenger of our ship.

Friday, February 02, 2018

Mental autonomy - developing a ‘culture of consciousness’.

One of MindBlog's threads has been presentation and discussion of work on the default mode network of our brains that mediates our mind wandering. One of my heroes, Thomas Metzinger, has written a nice essay on the larger implications of what we have learned. I strongly recommend that you read the whole piece, but will also pass on a rather extensive series of clips that convey the main points:
When traveling long distances, jumping saves dolphins energy, because there’s less friction in the air than in the water below. It also seems to be an efficient way to move rapidly and breathe at the same time…These cetacean acrobatics are a fruitful metaphor for what happens when we think. What most of us still call ‘our conscious thoughts’ are really like dolphins in our mind, jumping briefly out of the ocean of our unconscious for a short period before they submerge themselves once again. This ‘dolphin model of cognition’ helps us to understand the limits of our awareness.
One of the most exciting recent research fields in neuroscience and experimental psychology is mind-wandering – the study of spontaneous or task-unrelated thoughts… Much of the time we like to describe some foundational ‘self’ as the initiator or cause of our actions, but this is a pervasive myth. In fact, we only resemble something like this for about a third of our conscious lifetime… As far as our inner life is concerned, the science of mind-wandering implies that we’re only rarely autonomous persons.
As the dolphin story hints, human beings are not Cartesian egos capable of complete self-determination. Nor are we primitive, robotic automata. Instead, our conscious inner life seems to be about the management of spontaneously emerging mental behaviour. Most of what populates our awareness unfolds automatically, just like a heartbeat or autoimmune response, but it can still be guided to a greater or lesser degree. 
We ought to probe how our organism turns different sub-personal events into thoughts or states that appear to belong to ‘us’ as a whole, and how we can learn to control them more effectively and efficiently. This capacity creates what I call mental autonomy, and I believe it is the neglected ethical responsibility of government and society to help citizens cultivate it.
The mind wanders more frequently than most of us think – several hundred times a day and up to 50 per cent of our waking life, in fact…The wandering mind is like a monkey, swinging from branch to branch across an inner emotional landscape. It will flee from unpleasant perceptions and feelings, and try to reach a state that feels better. If the present moment is unattractive or boring, then of course it’s more pleasant to be planning the next holiday or drifting away into a romantic fantasy.
A multitude of empirical studies show that areas of our brain responsible for the wandering mind overlap to a large extent with the so-called default-mode network (DMN). This is a large network in our brain that typically becomes active during periods of rest, when attention is directed to the inside.
My view is that the mind-wandering network and the DMN basically serve to keep our sense of self stable and in good shape. Like an automatic maintenance program, they constantly generate new stories, weaving back and forth between different time-horizons, each micro-narrative contributing to the illusion that we are actually the same person over time. Like nocturnal dreaming, mind-wandering also appears to be a process by which our brain and body consolidate our long-term memory and stabilise specific parts of what I call the ‘self-model’. 
At its most basic, this self-model is based on an internal model of the body, including affective and emotional states, and grounded in inner-body perceptions such as gut feelings, heartbeat, breath, hunger or thirst. On another, higher layer, the self-model reflects a person’s relationships to other people, ethical and cultural norms, and sense of self-worth. But in order to create a robust connection between the social and biological levels, the self-model fosters the illusion of transtemporal identity – the belief that we are a whole and persisting entity based on the narrative our brain tells itself about ‘our’ past, present and future. (I think that it was exactly the impression of transtemporal identity that turned into one of the central factors in the emergence of large human societies, which rely on the understanding that it is I who will be punished or rewarded in the future. Only as long as we believe in our own continuing identity does it make sense for us to treat our fellow human beings fairly, for the consequences of our actions will, in the end, always concern us.)
But don’t lose sight of the fact that all this modelling is just a convenient trick our organism plays on itself to enhance its chances of survival. We must not forget that the phenomenal realm (how we subjectively experience ourselves) is only a small part of the neurobiological one (the reality of the creatures we actually are). There’s no little person in our head, only a set of dynamical, self-organising processes at play behind the scenes. Yet it seems like these processes often function by creating self-fulfilling prophecies; in other words, we have an identity because we convince ourselves we have one. Humans have evolved to be a bit like method actors, who need to really imagine and believe they are a particular character in order to perform effectively on stage. But just as there is no ‘real’ character, there’s also no such thing as ‘a self’, and probably nothing like an immortal soul either. 
…one of the main functions of the self-model is how it lets our biological organism predict, and thereby control, the sensory consequences of our actions. That produces what’s called our sense of agency… When I close my fingers around the stem of a wineglass or feel the rough surface of a tennis ball in my hand, I infer that I must be an agent who is capable of originating, controlling and owning all these events.
…just like a method actor can’t focus on the fact that she’s acting, our biological organism is usually unable to experience our self-model as a model. Instead, we tend to identify with its content, just as the actor identifies with the character. The more we achieve a high degree of predictability over our behaviour, the more tempting it is to say: this is me, and I did this. We tell ourselves a brilliant and parsimonious causal story, even if it’s false from the third-person perspective of science. Empirically speaking, the self-as-agent is just a useful fiction or hypothesis, a neurocomputational artefact of our evolved self-model.
On the level of the brain, this process is a truly amazing affair, and a major achievement of evolution. But if we look at the resulting conscious experience from the outside, and on the level of the whole person, the brain’s mini-narrative also appears as a misrepresentation, slightly complacent, a bit grandiose, and ultimately delusional. Agency on the level of thought is really a ‘surface’ phenomenon, produced by the fact that the underwater, unconscious causal precursors are simply unknown to us. Even if we sometimes reach what resembles the rationalist ideal, we probably do so only sporadically, and the notion of controlled, effortful thinking is probably a very bad model of conscious thought in general. Our conscious mental activity is usually an unbidden, unintentional form of behaviour. Yet somehow the tourist on the prow begins to experience herself as an omnipotent magician, making dolphins come into existence out of the blue, and jump at her command.
The self might not be a Cartesian agent that causes thought or action, but perhaps there are other ways for the organism as a whole to shape what happens in its mental life. We can’t get off the ship, let alone summon dolphins from nowhere, but perhaps we can choose where to look. 
We’re familiar with the idea of autonomy over our actions in the outer realm, such as when we control our bodily movements…however, there are not only bodily actions, but also mental ones… actively re-directing attention to your breath in meditation, deliberately paying attention to a person’s face in front of you, trying to retrieve visual images from your memory, logical thinking, or engaging in mental calculation. Note that deliberately not acting is as important here as acting. The defining feature of autonomy in both the inner and outer realms is veto control, the power to inhibit, suspend or terminate ongoing actions.  A specific layer of the self-model is of central importance here. I call it the ‘epistemic agent model’ – the bit that allows us to have the feeling ‘I am a knowing self; I know that I know.’ This is the true origin of our first-person perspective. It’s created by predictions about what the organism can and will know in the future, and helps us to continuously improve our model of reality.
Now we can see mind-wandering for what it really is: a transient loss of mental autonomy, via the loss of the epistemic-agent model. A daydream just happens to you – there is ownership, but no control over the event. It is not something you do, but something in which you ‘lose yourself’. You have forgotten a specific kind of self-knowledge, the ability to terminate a train of thought and to choose what it is you want to know. You might daydream about being a knowing self, but right now you have lost all awareness of your own power to put an end to the process.
Meditation research is poised to make major contributions to mental autonomy. Mindfulness practice can sometimes lead to a crystal-clear and silent mind that is not clouded by thoughts at all, the pure conscious experience of mental autonomy as such that arises without actually exerting control. In long-term practitioners, this can result from the cultivation of a kind of inner non-acting that includes noticing, gently letting go, and resting in an open, effortless state of choiceless awareness. However, in the beginning, meditation clearly involves making decisions, as subjects develop meta-awareness, alongside an awareness of their capacity for attentional control. This can be seen as a systematic form of ‘experience sampling’.
Whether this sort of cognition really requires a robust notion of selfhood, as most Western philosophers would argue, would be disputed in many Eastern traditions. Here the highest level of mental autonomy is often seen as a form of impersonal witnessing or (in the words of the Indian-born philosopher Jiddu Krishnamurti) ‘observing without an observer’ (though even this pure form of global meta-awareness still contains the implicit knowledge that the organism could act if necessary). There seems to be a middle way: perhaps mental autonomy can actually be experienced as such, in a non-agentive way, as a mere capacity. The notion of ‘mental autonomy’ could therefore be a deep point of contact where Eastern and Western philosophy discover common conceptual ground.
It’s important to remember that neuroscience isn’t the only piece of the puzzle. Culture plays its part, too…Accountability and ethical responsibility might actually be implemented in the human brain from above, via early social interactions between children and adults. If we tell children at an early age that they are fully responsible for their own actions, and if we accordingly punish and reward them, then this assumption will get built into their conscious self-model...The human adult’s conscious model of the ‘self’ might therefore be an enculturated post-hoc confabulation, at least in part – a causal-inference illusion that’s become part of how we model our own sociocultural niche, ultimately based on how we’ve internalised social interactions and ingrained language games.
…the mind-wandering network does not, I believe, actually produce thoughts. It also is not conscious – the person as a whole is. Rather, it creates what I would describe as cognitive affordances, opportunities for inner action. In the theory of psychology developed by J J Gibson, what we perceive in our environment aren’t simply objects, but possible actions: this is something I could sit on, this is something I could put into my mouth, and so on. Cognitive affordances are possible mental actions, and they are not perceived with our sensory organs but they are available for introspection.
Cognitive affordances are actually precursors of thoughts, or proto-thoughts, that call out ‘Think me!’ or ‘Don’t miss me – I am the last of my kind!’ Our inner landscape is full of these possibilities, which we must constantly navigate. What mind-wandering does is create a fluid and highly dynamic task-domain. 
One central function of mind-wandering, then, could be to provide us with an internal environment of competing affordances, accompanied by possible mental actions, which have the potential to become an extended process of controlling the content of your own mind. This inner landscape could even be below our conscious awareness, but it is out of this that the epistemic-agent model emerges, like any other conscious experience, seemingly selecting what she wants to know and what she wants to ignore…true autonomy is about different levels of context-sensitivity and supple self-control.  
What is clear by now is that our societies lack systematic and institutionalised ways of enhancing citizens’ mental autonomy. This is a neglected duty of care on the part of governments. There can be no politically mature citizens without a sufficient degree of mental autonomy, but society as a whole does not act to protect or increase it. Yet, it might be the most precious resource of all. In the end, and in the face of serious existential risks posed by environmental degradation and advanced capitalism, we must understand that citizens’ collective level of mental autonomy will be the decisive factor.
It was William James, the father of American psychology, who said in 1892: ‘And the faculty of voluntarily bringing back a wandering attention over and over again is the very root of judgment, character, and will. […] And education which should improve this faculty would be the education par excellence.’ We can finally see more clearly what meditation is really about: over the centuries, the main goal has always been a sustained enhancement of one’s mental autonomy.
Mental autonomy brings together the core ideas of both Eastern and Western philosophy. It helps us see the value of both secularised spiritual practice and of rigorous, rational thought. There seem to be two complementary ways to understand the dolphins in our own mind: one, from the point of view of a truly hard-nosed, scientifically minded tourist on the prow of the boat; and two, from the perspective of the wide-open sky, silently looking down from above at the tourist and the dolphins porpoising in the ocean.

Wednesday, August 30, 2017

An essay on the real problem of consciousness.

For those of you who are consciousness mavens, I would recommend having a glance at Anil Seth’s essay, which gives a clear-headed description of some current ideas about what consciousness is. He summarizes the model of consciousness as an ensemble of predictive perceptions. Clips from his essay:
The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
...instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
More recently, in my lab, we’ve been probing the predictive mechanisms of conscious perception in more detail. In several experiments...we’ve found that people consciously see what they expect, rather than what violates their expectations. We’ve also discovered that the brain imposes its perceptual predictions at preferred points (or phases) within the so-called ‘alpha rhythm’, which is an oscillation in the EEG signal at about 10 Hz that is especially prominent over the visual areas of the brain. This is exciting because it gives us a glimpse of how the brain might actually implement something like predictive perception, and because it sheds new light on a well-known phenomenon of brain activity, the alpha rhythm, whose function so far has remained elusive.
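The core computational idea in Seth's clips — perception as a top-down hypothesis continually "reined in" by bottom-up prediction errors — can be caricatured in a few lines. This is my own toy sketch of the error-minimization loop, not anything from Seth's essay or lab:

```python
# Toy predictive-processing loop: the "brain" holds a prediction and updates
# it using only the prediction error flowing in from the sensory surface.
sensory_input = 4.2      # signal arriving from the world
prediction = 0.0         # the brain's current top-down hypothesis
learning_rate = 0.3      # how strongly each error nudges the hypothesis

for step in range(30):
    error = sensory_input - prediction   # only this error flows "inward"
    prediction += learning_rate * error  # hypothesis reined in by the error

print(round(prediction, 3))
```

After a few dozen updates the prediction converges on the input; in the full story this happens simultaneously at many hierarchical levels, and what we consciously perceive corresponds to the predictions, not the residual errors.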

Tuesday, August 22, 2017

Race based biases in deception judgements.

From Lloyd et al.:
In six studies (N = 605), participants made deception judgments about videos of Black and White targets who told truths and lies about interpersonal relationships. White participants judged that Black targets were telling the truth more often than they judged that White targets were telling the truth. This truth bias was predicted by Whites’ motivation to respond without prejudice. For Black participants, however, motives to respond without prejudice did not moderate responses. We found similar effects with a manipulation of the targets’ apparent race. Finally, we used eye-tracking techniques to demonstrate that Whites’ truth bias for Black targets is likely the result of late-stage correction processes: Despite ultimately judging that Black targets were telling the truth more often than White targets, Whites were faster to fixate on the on-screen “lie” response box when targets were Black than when targets were White. These systematic race-based biases have important theoretical implications (e.g., for lie detection and improving intergroup communication and relations) and practical implications (e.g., for reducing racial bias in law enforcement).