Wednesday, May 01, 2024

How blue light regulates the body, brain, and immune system

Here is the abstract from a PNAS Perspective article by Slominski et al. with the title "Photo-neuro-immuno-endocrinology: How the ultraviolet radiation regulates the body, brain, and immune system." MindBlog readers can request a PDF of the article from me.


Ultraviolet radiation (UVR) is primarily recognized for its detrimental effects such as carcinogenesis, skin aging, eye damage, and autoimmune disorders. With the exception of the ultraviolet B (UVB) requirement for the production of vitamin D3, the positive role of UVR in the modulation of homeostasis is underappreciated. Skin exposure to UVR triggers local responses secondary to the induction of chemical, hormonal, immune, and neural signals that are defined by the chromophores and extent of UVR penetration into skin compartments. These responses are not random and are coordinated by the cutaneous neuro-immuno-endocrine system, which counteracts the action of external stressors and accommodates local homeostasis to the changing environment. UVR induces electrical, chemical, and biological signals to be sent to the brain, endocrine and immune systems, and other central organs, which in concert regulate body homeostasis. To achieve this central homeostatic goal, UVR-induced signals are precisely computed locally and transmitted through nerves, or released as humoral signals into the circulation, to activate and/or modulate coordinating central centers or organs. Such modulatory effects depend on the UVA and UVB wavelengths involved. This leads to immunosuppression, the activation of brain and endocrine coordinating centers, and the modification of different organ functions. It is therefore imperative to understand the underlying mechanisms by which UVR electromagnetic energy penetrates deep into the body and affects the brain and internal organs. Photo-neuro-immuno-endocrinology can offer novel therapeutic approaches in addiction and mood disorders; autoimmune, neurodegenerative, and chronic pain-generating disorders; or pathologies involving the endocrine, cardiovascular, gastrointestinal, or reproductive systems.

Monday, April 29, 2024

An expanded view of human minds and their reality.

I want to pass on this recent essay by Venkatesh Rao in its entirety, because it has changed my mind: I no longer agree with Daniel Dennett that the “Hard Problem” of consciousness is a fabrication that doesn’t actually exist. There are so many interesting ideas in this essay that I will be returning to it frequently in the future.

We Are All Dennettians Now

An homage riff on AI+mind+evolution in honor of Daniel Dennett

The philosopher Daniel Dennett (1942-2024) died last week. Dennett’s contributions to the 1981 book he co-edited with Douglas Hofstadter, The Mind’s I,¹ which I read in 1996 (rather appropriately while doing an undergrad internship at the Center for AI and Robotics in Bangalore), helped shape a lot of my early philosophical development. A few years later (around 1999 I think), I closely read his trollishly titled 1991 magnum opus, Consciousness Explained (alongside Steven Pinker’s similar volume How the Mind Works), and that ended up shaping a lot of my development as an engineer. Consciousness Explained is effectively a detailed neuro-realistic speculative engineering model of the architecture of the brain in a pseudo-code-like idiom. I stopped following his work closely at that point, since my tastes took me in other directions, but I did take care to keep him on my radar loosely.

So in his honor, I’d like to (rather chaotically) riff on the interplay of the three big topics that form the through-lines of his life and work: AI, the philosophy of mind, and Darwinism. Long before we all turned into philosophers of AI overnight with the launch of ChatGPT, he defined what that even means.

When I say Dennett’s views shaped mine, I don’t mean I necessarily agreed with them. Arguably, your early philosophical development is not shaped by discovering thinkers you agree with. That’s for later-life refinements (Hannah Arendt, whom I first read only a few years ago, is probably the most influential agree-with philosopher for me). Your early development is shaped by discovering philosophers you disagree with.

But any old disagreement will not shape your thinking. I read Ayn Rand too (if you want to generously call her a philosopher) around the same time I discovered Dennett, and while I disagreed with her too, she basically had no effect on my thinking. I found her work to be too puerile to argue with. But Dennett — disagreeing with him forced me to grow, because it took serious work over years to decades — some of it still ongoing — to figure out how and why I disagreed. It was philosophical weight training. The work of disagreeing with Dennett led me to other contemporary philosophers of mind like David Chalmers and Ned Block, and various other more esoteric bunnytrails. This was all a quarter century ago, but by the time I exited what I think of as the path-dependent phase of my philosophical development circa 2003, my thinking bore indelible imprints of Dennett’s influence.

I think Dennett was right about nearly all the details of everything he touched, and also right (and more crucially, tasteful) in his choices of details to focus on as being illuminating and significant. This is why he was able to provide elegant philosophical accounts of various kinds of phenomenology that elevated the corresponding discourses in AI, psychology, neuroscience, and biology. His work made him a sort of patron philosopher of a variety of youngish scientific disciplines that lacked robust philosophical traditions of their own. It also made him a vastly more relevant philosopher than most of his peers in the philosophy world, who tend, through some mix of insecurity, lack of courage, and illiteracy, to stay away from the dirty details of technological modernity in their philosophizing (and therefore cut rather sorry figures when they attempt to weigh in on philosophy-of-technology issues with cartoon thought experiments about trolleys or drowning children). Even the few who came close, like John Searle, could rarely match Dennett’s mastery of vast oceans of modern techno-phenomenological detail, even if they tended to do better with clever thought experiments. As far as I am aware, Dennett has no clever but misleading Chinese Rooms or Trolley Problems to his credit, which to my mind makes him a superior rather than inferior philosopher.

I suspect he paid a cost for his wide-ranging, ecumenical curiosities in his home discipline. Academic philosophers like to speak in a precise code about the simplest possible things, to say what they believe to be the most robust things they can. Dennett on the other hand talked in common language about the most complex things the human mind has ever attempted to grasp. The fact that he got his hands (and mind!) dirty with vast amounts of technical detail, and dealt in facts with short half-lives from fast-evolving fields, and wrote in a style accessible to any intelligent reader willing to pay attention, made him barely recognizable as a philosopher at all. But despite the cosmetic similarities, it would be a serious mistake to class him with science popularizers or TED/television scientists with a flair for spectacle at the expense of substance.

Though he had a habit of being uncannily right about a lot of the details, I believe Dennett was almost certainly wrong about a few critical fundamental things. We’ll get to what and why later, but the big point to acknowledge is that if he was indeed wrong (and to his credit, I am not yet 100% sure he was), he was wrong in ways that forced even his opponents to elevate their games. He was as much a patron philosopher (or troll or bugbear) to his philosophical rivals as to the scientists of the fields he adopted. You could not even be an opponent of Dennett except in Dennettian ways. To disagree with the premises of Strong AI or Dennett’s theory of mind is to disagree in Dennettian ways.

If I were to caricature how I fit in the Dennettian universe, I suspect I’d be closest to what he called a “mysterian” (though I don’t think the term originated with him). Despite mysterian being something of a dismissive slur, it does point squarely at the core of why his opponents disagree with him, and the parts of their philosophies they must work to harden and make rigorous, to withstand the acid forces of the peculiarly Dennettian mode of scrutiny I want to talk about here.

So to adapt the line used by Milton Friedman to describe Keynes: We are all Dennettians now.

Let’s try and unpack what that means.


As I said, in Dennettian terms, I am a “mysterian.” At a big commitments level, mysterianism is the polar opposite of the position Dennett consistently argued across his work, a version of what we generally call a “Strong AI” position. But at the detailed level, there are no serious disagreements. Mysterians and Strong AI people agree about most of the details of how the mind works. They just put the overall picture together differently because mysterians want to accommodate certain currently mysterious things that Strong AI people typically reject as either meaningless noise or shallow confusions/illusions.

Dennett’s version of Strong AI was both more robustly constructed than the sophomoric versions one typically encounters, and more broadly applied: beyond AI to human brains and seemingly intelligent processes like evolution. Most importantly, it was actually interesting. Reading his accounts of minds and computers, you do not come away with the vague suspicion of a non-neurotypical succumbing to the typical-mind fallacy and describing the inner life of a robot or philosophical zombie as “truth.” From his writing, it sounds like he had a fairly typical inner-life experience, so why did he seem to deny the apparent ineffable essence of it? Why didn’t he try to eff that essence the way Chalmers, for instance, does? Why did he seemingly dismiss it as irrelevant, unreal, or both?

To be a mysterian in Dennettian terms is to take ineffable, vitalist essences seriously. With AIs and minds, it means taking the hard problem of consciousness seriously. With evolution, it means believing that Darwinism is not the whole story. Dennett tended to use the term as a dismissive slur, but many (re)claim it as a term of approbation, and I count myself among them.

To be a rigorous mysterian, as opposed to one of the sillier sorts Dennett liked to stoop to conquer (naive dualists, intelligent-designers, theological literalists, overconfident mystics…), you have to take vitalist essences “seriously but not literally.” My version of doing that is to treat my vitalist inclinations as placeholder pointers to things that lurk in the dank, ungrokked margins of the thinkable, just beyond the reach of my conceptualizing mind. Things I suspect exist by the vague shapes of the barely sensed holes they leave in my ideas. In pursuit of such things, I happily traffic in literary probing of Labatutian/Lovecraftian/Ballardian varieties, self-consciously magical thinking, junk from various pre-paradigmatic alchemical thought spaces, constructs that uncannily resemble astrology, and so on. I suppose it’s a sort of intuitive-ironic cognitive kayfabe for the most part, but it’s not entirely so.

So for example, when I talk of elan vital, as I frequently do in this newsletter, I don’t mean to imply I believe in some sort of magical fluid flowing through living things or a Gaian planetary consciousness. Nor do I mean the sort of overwrought continental metaphysics of time and subjectivity associated with Henri Bergson (which made him the darling of modernist literary types and an object of contempt to Einstein). I simply mean I suspect there are invisible things going on in the experience and phenomenology of life that are currently beyond my ability to see, model, and talk about using recognizably rational concepts, and I’d rather talk about them as best I can with irrational concepts than pretend they don’t exist.

Or to take another example, when I say that “Darwin is not the whole story,” I don’t mean I believe in intelligent design or a creator god (I’m at least as strong an atheist as Dennett was). I mean that Darwinian principles of evolution constrain but do not determine the nature of nature, and we don’t yet fully grok what completes the picture except perhaps in hand-wavy magical-thinking ways. To fully determine what happens, you need to add more elements. For example, you can add ideas like those of Stuart Kauffman and other complexity theorists. You could add elements of what Maturana and Varela called autopoiesis. Or it might be none of these candidate hole-filling ideas, but something to be dreamt up years in the future. Or never. But just because there are only unsatisfactory candidate ways for talking about stuff doesn’t mean you should conclude the stuff doesn’t exist.

In all such cases, there are more things present in the phenomenology I can access than I can talk about using terms of reference that would be considered legitimate by everybody. These are neither known-unknowns (holes whose shapes are defined by concepts that seem rational), nor unknown-unknowns (which have not yet appeared in your senses and therefore, to apply a Gilbert Ryle principle, cannot be in your mind).

These are things that we might call magically known. Like chemistry was magically known through alchemy. For phenomenology to be worth magically knowing, the way-of-knowing must offer interesting agency, even if it doesn’t hang together conceptually.

Dennett seemed, for the most part, to fiercely resist and reject such impulses. He genuinely seemed to think that belief in (say) the hard problem of consciousness was some sort of semantic confusion. Unlike, say, B. F. Skinner, whom critics accused of only pretending not to believe in inner processes, Dennett seemed to actually disbelieve in them.

Dennett seemed to disregard a cousin to the principle that absence of evidence is not evidence of absence: Presence of magical conceptualizations does not mean absence of phenomenology. A bad pointer does not disprove the existence of what it points to. This sort of error is easy to avoid in most cases. Lightning is obviously real even if some people seem to account for it in terms of Indra wielding his vajra. But when we try to talk of things that are on the phenomenological margins, barely within the grasp of sensory awareness, or worse, potentially exist as incommunicable but universal subjective phenomenology (such as the experience of the color “blue”), things get tricky.

Dennett was a successor of sorts to philosophers like Gilbert Ryle, and psychologists like B. F. Skinner. In evolutionary philosophy, his thinking aligned with people like Richard Dawkins and Steven Pinker, and against Noam Chomsky (often classified as a mysterian, though I think the unreasonable effectiveness of LLMs somewhat vindicates Chomsky’s notions of an ineffable more-than-Darwin essence around universal grammars that we don’t yet understand).

I personally find it interesting to poke at why Dennett took the positions he took, given that he was contemplating the same phenomenological data and low-to-mid-level conceptual categories as the rest of us (indeed, he supplied much of it for the rest of us). One way to get at it is to ask: Was Dennett a phenomenologist? Are the limits of his ideas the limits of phenomenology?

I think the answers are yes and yes, but he wasn’t a traditional sort of phenomenologist, and he didn’t encounter the more familiar sorts of limits.

The Limits of Phenomenology

Let’s talk regular phenomenology first, before tackling what I think was Dennett’s version.

I think of phenomenology, as a working philosophical method, to be something like a conceited form of empiricism that aims to get away from any kind of conceptually mediated seeing.

When you begin to inquire into a complex question with any sort of fundamentally empirical approach, your philosophy can only be as good as a) the things you know now through your (potentially technologically augmented) senses and b) the ways in which you know those things.

The conceit of phenomenology begins with trying to “unknow” what is known to be known, and contemplate the resulting presumed “pure” experiences “directly.” There are various flavors of this: Husserlian bracketing in the Western tradition, Zen-like “beginner mind” practices, Vipassana-style recursive examination of mental experiences, and so on. Some flavors apply only to sense-observations of external phenomena, others apply only to subjective introspection, and some apply to both. Given the current somewhat faddish uptick in Eastern-flavored disciplines of interiority, it is important to note that the phenomenological attitude is not necessarily inward-oriented. For example, the 19th century quest to measure a tenth of a second, and factor out the “personal equation” in astronomical observations, was a massive project in Western phenomenology. The abstract thought experiments with notional clocks in the theory of relativity began with the phenomenology of real clocks.

In “doing” phenomenology, you are assuming that you know what you know relatively completely (or can come to know it), and have a reliable procedure for either unknowing it, or systematically alloying it with skeptical doubt, to destabilize unreliable perceptions it might be contributing to. Such destabilizability of your default, familiar way of knowing, in pursuit of a more-perfect unknowing, is in many ways the essence of rationality and objectivity. It is the (usually undeclared) starting posture for doing “science,” among other things.

Crucially, for our purposes in this essay, you do not make a careful distinction between things you know in a rational way and things you know in a magical or mysterian way, but effectively assume that only the former matter; that the latter can be trivially brushed aside as noise signifying nothing that needs unknowing. I think the reverse is true. It is harder, to the point of near impossible, to root out magical ideas from your perceptions, and they signify the most important things you know. More to the point, it is not clear that trying to unknow things, especially magical things, is in fact a good idea, or that unknowing is clarifying rather than blinding. But phenomenology is committed to trying. This has consequences for “phenomenological projects” of any sort, be they Husserlian or Theravadan in spirit.

A relatively crude example: “life” becomes much less ineffable (and depending on your standards, possibly entirely drained of mystery) once you view it through the lens of DNA. Not only do you see new things through new tools, you see phenomenology you could already see, such as Mendelian inheritance, in a fundamentally different way that feels phenomenologically “deeper” when in fact it relies on more conceptual scaffolding, more things that are invisible to most of us, and requires instruments with elaborate theories attached to even render intelligible. You do not see “ATCG” sequences when contemplating a pea flower. You could retreat up the toolchain and turn your attention to how instruments construct the “idea of DNA” but to me that feels like a usually futile yak shave. The better thing to do is ask why a more indirect way of knowing somehow seems to perceive more clearly than more direct ways.

It is obviously hard to “unsee” knowledge of DNA today when contemplating the nature of life. But it would have been even harder to recognize that something “DNA shaped” was missing in say 1850, regardless of your phenomenological skills, by unknowing things you knew then. In fact, clearing away magical ways of knowing might have swept away critical clues.

To become aware, as Mendel did, that there was a hidden order to inheritance in pea flowers, takes a leap of imagination that cannot be purely phenomenological. To suspect in 1943, as Schrödinger did, the existence of “aperiodic covalent bonded crystals” at the root of life, and point the way to DNA, takes a blend of seeing and knowing that is greater than either. Magical knowing is pathfinder-knowing that connects what we know and can see to what we could know and see. It is the bootstrapping process of the mind.

Mendel and Schrödinger “saw” DNA before it was discovered, in terms of reference that would have been considered “rational” in their own time, but this has not always been the case. Newton, famously, had a lot of magical thinking going on in his successful quest for a theory of gravity. Kepler was a numerologist. Leibniz was a ball of mad ideas. One of Newton’s successful bits of thinking, the idea of “particles” of light, which faced off against Huygens’ “waves,” has still not exited the magical realm. The jury is still out in our time about whether quantized fields are phenomenologically “real” or merely a convenient mnemonic-metaphoric motif for some unexpected structure in some unreasonably effective math.

Arguably, none of these thinkers was a phenomenologist, though all had a disciplined empirical streak in their thinking. The history of their ideas suggests that phenomenology is no panacea for philosophical troubles with unruly conceptual universes that refuse to be meekly and rationally “bracketed” away. There is no systematic and magic-free way to march from current truths to better ones via phenomenological disciplines.

The fatal conceit of naive phenomenology (which Paul Feyerabend spotted) is the idea that there is a privileged, reliable (or meta-reliable) “technique” of relating to your sense experiences, independent of the concepts you hold, whether that “technique” is Husserlian bracketing or vipassana. Understood this way, theories of reality are not that different from physical instruments that extend our senses. Experiment and theory don’t always expose each other’s weaknesses. Sometimes they mutually reinforce them.

In fact, I would go so far as to suggest—and I suspect Dennett would have agreed—that there is no such thing as phenomenology per se. All we ever see is the most invisible of our theories (rational and magical), projected via our senses and instruments (which shape, and are shaped by, those theories), onto the seemingly underdetermined aspects of the universe. There are only incomplete ways of knowing and seeing within which ideas and experiences are inextricably intertwined. No phenomenological method can consistently outperform methodological anarchy.

To deny this is to be a traditional phenomenologist, striving to procedurally separate the realm of ideas and concepts from the realm of putatively unfactored and “directly perceived” (a favorite phrase of meditators) “real” experiences.

Husserlian bracketing — “suspending trust in the objectivity of the world” — is fine in theory, but not so easy in practice. How do you know that you’re setting aside preconceived notions, judgments, and biases and attending to a phenomenon as it truly is? How do you set aside the unconscious “theory” that the Sun revolves around the Earth, and open your mind to the possibility that it’s the other way around? How do you “see” DNA-shaped holes in current ways of seeing, especially if they currently manifest as strange demons that you might sweep away in a spasm of over-eager epistemic hygiene? How do you relate, as a phenomenologist, to intrinsically conceptual things like electrons and positrons that only exist behind layers of mathematics describing experimental data processed through layers of instrumentation conceived by existing theories? If you can’t check the math yourself, how can you trust that the light bulb turning on is powered by those “electrons” tracing arcs through cloud chambers?

In practice, we know how such shifts actually came about. Not because philosophers meditated dispassionately on the “phenomenology” with free minds seeing reality as it “truly is,” but because astronomers and biologists with heads full of weird magical notions looked through telescopes and microscopes, maintained careful notes of detailed measurements informed by those weird magical theories, and tried to account for discrepancies. Tycho Brahe, for instance, who provided the data that dethroned Ptolemy, believed in some sort of Frankenstein helio-geo-centric Ptolemy++ theory. Instead of explaining the discrepancies, as Kepler later did, Brahe attempted to explain them away using terms of reference he was attached to. He failed to resolve the tension. But he paved the way for Kepler to resolve that particular tension (and Kepler, of course, introduced new ones while lost in his own magical thinking about Platonic solids). Formally phenomenological postures were not just absent from the story, but would arguably have retarded it by being too methodologically conservative.

Phenomenology, in other words, is something of a procedural conceit. An uncritical trust in self-certifying ways of seeing based entirely on how compelling they seem to the seer. The self-certification follows some sort of seemingly rational procedure (which might be mystical but still rational in the sense of being coherent and disciplined and internally consistent) but ultimately derives its authority from the intuitive certainties and suspicions of the perceiving subject. Phenomenological procedures are a kind of rule-by-law for governing sense experience in a laissez-faire way, rather than the “objective” rule-of-law they are often presented as. Phenomenology is to empiricism as “socialism with Chinese characteristics” is to liberal democracy.

This is not to say phenomenology is hopelessly unreliable or useless. All methodologies have their conceits, which manifest as blindspots. With phenomenology, the blindspot manifests as an insistence on non-magicality. The phenomenologist fiercely rejects the Cartesian theater and the varied ghosts-in-machines that dance there. The meditator insists he is “directly perceiving” reality in a reproducible way, no magic necessary. I do not doubt that these convictions are utterly compelling to those who hold them; as compelling as the incommunicable reality of perceiving “blue” is to everybody. I have no particular argument with such insistence. What I actually have a problem with is the delegitimization of magical thinking in the process, which I suspect to be essential for progress.

My own solution is to simply add magical thinking back into the picture for my own use, without attempting to defend that choice, and accepting the consequences.

For example, I take Myers-Briggs and the Enneagram seriously (but not literally!). I believe in the hard problem of consciousness, and therefore think “upload” and “simulationism” ideas are not-even-wrong. I don’t believe in Gods or AGIs, and therefore don’t see the point of Pascal’s wager type contortions to avoid heaven/hell or future-simulated-torture scenarios. In each case my commitments rely on chains of thought that are at least partly magical thinking, and decidedly non-phenomenological, which has various social consequences in various places. I don’t attempt to justify any of it because I think all schemes of justification, whether they are labeled “science” or something else, rest on traditional phenomenology and its limits.

Does this mean solipsism is the best we can hope for? This is where we get back to Dennett.

To his credit, I don’t think Dennett was a traditional phenomenologist, and he mostly avoided all the traps I’ve pointed out, including the trap of solipsism. Nor was he what one might call a “phenomenologist of language” like most modern analytical philosophers in the West. He was much too interested in technological modernity (and the limits of thought it has been exposing for a century) to be content with such a shrinking, traditionalist philosophical range.

But he was a phenomenologist in the broader sense of rejecting the possible reality of things that currently lack coherent non-magical modes of apprehension.

So how did he operate if not in traditional phenomenological ways?

Demiurge Phenomenology

I believe Dennett was what we might call a demiurge phenomenologist, which is a sort of late modernist version of traditional phenomenology. It will take a bit of work to explain what I mean by that.

I can’t recall if he ever said something like this (I’m definitely not a completist with his work and have only read a fraction of his voluminous output), but I suspect Dennett believed that the human experience of “mind” is itself subject to evolutionary processes (think Jaynes and bicameral mind theories for example — I seem to recall him saying something approving about that in an interview somewhere). He sought to construct philosophy in ways that did not derive authority from an absolute notion of the experience of mind. He tried to do relativity theory for minds, but without descending into solipsism.

It is easiest to appreciate this point by starting with body experience. For example, we are evolved from creatures with tails, but we do not currently possess tails. We possess vestigial “tail bones” and presumably bits of DNA relevant to tails, but we cannot know what it is like to have a tail (or, in the spirit of mysterian philosopher Thomas Nagel’s What Is It Like to Be a Bat? provocation, which I first read in The Mind’s I, what it is like for a tailed creature to have a tail).

We do catch tantalizing Lovecraftian-Ballardian glimpses of our genetic heritage though. For example, the gasping reflex and shot of alertness that accompanies being dunked in water (the mammalian dive reflex) is a remnant of a more aquatic evolutionary past that far predates our primate mode of existence. Now apply that to the experience of “mind.”

Why does Jaynes’ bicameral mind theory sound so fundamentally crackpot to modern minds? It could be that the notion is actually crackpot, but you cannot easily dismiss the idea that it’s actually a well-posed notion that only appears crackpot because we are not currently possessed of bicameral mind-experiences (modulo cognitive marginalia like tulpas and internal family systems — one of my attention/taste biases is to index strongly on typical rather than rare mental experiences; I believe the significance of the latter is highly overstated due to the personal significance they acquire in individual lives).

I hope it is obvious why the possibility that the experience of mind is subject to evolution is fatal to traditional phenomenology. If despite all the sophistication of your cognitive toolchain (bracketing, jhanas, ketamine, whatever), it turns out that you’re only exploring the limits of the evolutionarily transient and arbitrary “variety of mind” that we happen to experience, what does that say about the reliability of the resulting supposedly objective or “direct” perceptions of reality itself that such a mind mediates?

This, by the way, is a problem that evolutionary terms of reference make elegantly obvious, but you can get here in other ways. Darwinian evolution is convenient scaffolding to get there (and the one I think Dennett used), but ultimately dispensable. But however you get there, the possibility that experiences of mind are relative to contingent and arbitrary evolutionary circumstances is fatal to the conceits of traditional phenomenology. It reduces traditional phenomenology in status to any old sort of Cartesian or Platonic philosophizing with made-up bullshit schemas. You might as well make 2x2s all day like I sometimes do.

The Eastern response to this quandary has traditionally been rather defeatist — abandoning the project of trying to know reality entirely. Buddhist and Advaita philosophies in particular, tend to dispense with “objective reality” as an ontologically meaningful characterization of anything. There is only nothing. Or only the perceiving subject. Everything else is maya-moh, a sentimental attachment to the ephemeral unreal. Snap out of it.

I suspect Western philosophy was starting to head that way in the 17th century (through the Spinoza-vs-Leibniz shadowboxing years), but was luckily steered down a less defeatist path to a somewhat uneasy detente between a sort of “probationary reality” accessed through technologically augmented senses, and a subjectivity resolutely bound to that probationary reality via the conceits of traditional phenomenology. This is a long-winded way of saying “science happened” to Western philosophy.

I think that detente is breaking down. One sign is the growing popularity of the relatively pedestrian metaphysics of cognitive scientists like Donald Hoffman (leading to a certain amount of unseemly glee among partisans of Eastern philosophies — “omigod you think quantum mechanics shows reality is an illusion? Welcome to advaita lol”).

But despite these marginally interesting conversations, and whether you get there via Husserl, Hoffman, or vipassana, we’re no closer to resolving what we might call the fundamental paradox of phenomenology. If our experience of mind is contingent, how can any notion of justifiable absolute knowledge be sustained? We are essentially stopped clocks trying to tell the time.

Dennett, I think, favored one sort of answer: that the experience of mind was too untrustworthy and transient to build on, but that mind’s experience of mathematics was both trustworthy and absolute. Bicameral or monocameral, dolphin-brain or primate-brain, AI-brain or Hoffman-optimal ontological apparatus, one thing that is certain is that a prime number is a prime number in all ways that reality (probationary or not, illusory or not) collides with minds (typical or atypical, bursting with exotic qualia or full of trash qualia). Even the 17- and 13-year cicadas agree. Prime numbers constitute a fixed point in all the ways mind-like things have experience-like things in relation to reality-like things, regardless of whether minds, experiences, and reality are real. Prime numbers are like a motif that shows up in multiple unreliable dreams. If you’re going to build up a philosophy of being, you should only use things like prime numbers.

This is not just the most charitable interpretation of Dennett’s philosophy, but the most interesting and powerful one. It’s not that he thought of the mysterian weakness for ineffable experiences as being particularly “illusory”. As far as he was concerned, you could dismiss the “experience of mind” in its entirety as irrelevant philosophically. Even the idea that it has an epiphenomenal reality need not be seriously entertained because the thing that wants to entertain that idea is not to be trusted.

You see signs of this approach in a lot of his writing. In his collaborative enquiries with Hofstadter, in his fundamentally algorithmic-mathematical account of evolution, in his seemingly perverse stances in debates both with reputable philosophers of mind and disreputable intelligent designers. As far as he was concerned, anyone who chose to build any theory of anything on the basis of anything other than mathematical constancy was trusting the experience of mind to an unjustifiable degree.

Again, I don’t know if he ever said as much explicitly (he probably did), but I suspect he had a basic metaphysics similar to that of another simpatico thinker on such matters, Roger Penrose: as a triad of physical/mental/platonic-mathematical worlds projecting on to each other in a strange loop. But unlike Penrose, who took the three realms to be equally real (or unreal) and entangled in an eternal dance of paradox, he chose to build almost entirely on the Platonic-mathematical vertex, with guarded phenomenological forays to the physical world, and strict avoidance of the mental world as a matter of epistemological hygiene.

The guarded phenomenological forays, unlike those of traditional phenomenologists, were governed by an allow list rather than a block list. Instead of trying to “block out” suspect conceptual commitments with bracketing or meditative discipline, he made sure to only work with allowable concepts and percepts that seemed to have some sort of mathematical bones to them. So Turing machines, algorithms, information theory, and the like, all made it into his thinking in load-bearing ways. Everything else was at best narrative flavor or useful communication metaphors. People who took anything else seriously were guilty of deep procedural illusions rather than shallow intellectual confusions.

If you think about it, his accounts of AI, evolution, and the human mind make a lot more sense if you see them as outcomes of philosophical construction processes governed by one very simple rule: Only use a building block if it looks mathematically real.

Regardless of what you believe about the reality of things other than mathematically underwritten ones, this is an intellectually powerful move. It is a kind of computational constructionism applied to philosophical inquiry, similar to what Wolfram does with physics on automata or hypergraphs, or what Grothendieck did with mathematics.

It is also far harder to do, because philosophy aims and claims to speak more broadly and deeply than either physics or mathematics.

I think Dennett landed where he did, philosophically, because he was essentially trying to rebuild the universe out of a very narrow admissible subset of the phenomenological experience of it. Mysterian musings didn’t make it in because they could not ride allowable percepts and concepts into the set of allowable construction materials.

In other words, he practiced demiurge phenomenology. Natural philosophy as an elaborate construction practice based on self-given rules of construction.

In adopting such an approach he was ahead of his time. We’re on the cusp of being able to literally do what he tried to do with words — build phenomenologically immersive virtual realities out of computational matter that seem to be defined by nothing more than mathematical absolutes, and have almost no connection even to physical reality, thanks to the seeming buffering universality of Turing-equivalent computation.

In that almost, I think, lies the key to my fundamental disagreement with Dennett, and my willingness to wander in magical realms of thought where mathematically sure-footed angels fear to tread. There are… phenomenological gaps between mathematical reconstructions of reality by energetic demiurges (whether they work with powerful arguments or VR headsets) and reality itself.

The biggest one, in my opinion, is the experience of time, which seems to oddly resist computational mathematization (though Stephen Wolfram claims to have one… but then he claims to have a lot of things). In an indirect way, disagreeing with Dennett at age 20 led me to my lifelong fascination with the philosophy of time.

Where to Next?

It is something of a cliche that over the last century or two, philosophy has gradually and reluctantly retreated from an increasing number of the domains it once claimed as its own, as scientific and technological advances rendered ungrounded philosophical ideas somewhere between moot and ridiculous. Bergson retreating in the face of the Einsteinian assault, ceding the question of the nature of time to physics, is probably as good a historical marker of the culmination of the process as any.

I would characterize Dennett as a late modernist philosopher in relation to this cliche. Unlike many philosophers, who simply gave up on trying to provide useful accounts of things that science and technology were beginning to describe in inexorably more powerful ways, he brought enormous energy to the task of simply keeping up. His methods were traditional, but his aim was radical: Instead of trying to provide accounts of things, he tried to provide constructions of things, aiming to arrive at a sense of the real through philosophical construction with admissible materials. He was something like Brouwer in mathematics, trying to do away with suspect building blocks to get to desirable places only using approved methods.

This actually worked very well, as far as it went. I think his philosophy of mind was almost entirely correct as far as the mechanics of cognition go, and the findings of modern AI vindicate a lot of the specifics. For example, his “multiple drafts” model of cognition (where one part of the brain generates many behavioral options in a bottom-up, anarchic way, and another part chooses a behavior from among them) is broadly correct, not just as a description of how the brain works, but of how things like LLMs work. And unlike those of many other so-called philosophers of AI he disagreed with, such as Nick Bostrom, Dennett’s views managed to be provocative without being simplistic, opinionated without being dogmatic. He appeared to have a Strong AI stance similar to many people I disagree with, but unlike most of those people, I found his views worth understanding with some care, and hard to dismiss as wrong, let alone not-even-wrong.

I like to think he died believing his philosophies — of mind, AI, and Darwinism — to be on the cusp of a triumphant redemption. There are worse ways to go than believing your ideas have been thoroughly vindicated. And indeed, there was a lot Dennett got right. RIP.

Where do we go next with Dennettian questions about AI, minds, and evolution?

Oddly enough, I think Dennett himself pointed the way: demiurge phenomenology is the way. We just need to get more creative with it, and admit magical thinking into the process.

Dennett, I think, approached his questions the way some mathematicians originally approached Euclid’s fifth postulate: Discard it and try to either do without, or derive it from the other postulates. That led him to certain sorts of demiurgical constructions of AI, mind, and evolution.

There is another, equally valid way. Just as other mathematicians replaced the fifth postulate with alternatives and ended up with consistent non-Euclidean geometries, I think we could entertain different mysterian postulates and end up with a consistent non-Dennettian metaphysics of AI, mind, and evolution. You’d proceed by trying to do your own demiurgical constructing of a reality. An alternate reality.

For instance, what happens if you simply assume that there is human “mind stuff” that ends with death, cannot be uploaded or transferred to other matter, and can never emerge in silico? You don’t have to try accounting for it (no need to mess with speculations about the pineal gland like Descartes, or worry about microtubules and sub-Planck-length phenomena like Penrose). You could just assume that consciousness is a thing like space or time, run with the idea, and see where you land and what sort of consistent metaphysical geometries are possible. This is in fact what certain philosophers of mind like Ned Block do.

The procedure can be extended to other questions as well. For instance, if you think Darwin is not the whole story with evolution, you could simply assume there are additional mathematical selection factors having to do with fractals or prime numbers, and go look for them, as the Santa Fe biologists have done. Start simple and stupid, for example, by applying a rule that “evolution avoids rectangles” or “evolution cannot get to wheels made entirely of grown organic body parts” and see where you land (for the latter, note that the example in the His Dark Materials trilogy cheats — that’s an assembled wheel, not an evolved one).

But all these procedures follow the basic Dennettian approach of demiurgical constructionist phenomenology. Start with your experiences. Let in an allow-list of percepts and concepts. Add an arbitrarily constructed magical suspicion or two. Let your computer build out the entailments of those starter conditions. See what sort of realities you can conjure into being. Maybe one of them will be more real than your current experience of reality. That would be progress. Perhaps progress only you can experience, but still, progress.

Would such near-solipsistic activities constitute a collective philosophical search for truth? I don’t know. But then, I don’t know if we have ever been on a coherent collective philosophical search for truth. All we’ve ever had is more or less satisfying descriptions of the primal mystery of our own individual existence.

Why is there something, rather than nothing, it is like, to be me?

Ultimately, Dennett did not seem to find that question to be either interesting or serious. But he pointed the way for me to start figuring out why I do. And that’s why I too am a Dennettian.

Footnote 1
I found the book in my uncle’s library, and the only reason I picked it up was that I recognized Hofstadter’s name: Gödel, Escher, Bach had recently been recommended to me. I think it’s one of the happy accidents of my life that I read The Mind’s I before I read Gödel, Escher, Bach. That accident of path-dependence may have made me a truly philosophical engineer as opposed to just an engineer with a side interest in philosophy. Hofstadter is of course much better known and familiar in the engineering world, and reading him is something of a rite of passage in the education of the more sentimental sorts of engineers. But Hofstadter’s ideas were mostly entertaining and informative for me, in the mode of popular science, rather than impactful. Dennett, on the other hand, was impactful.

Wednesday, April 10, 2024

The world of decentralized everything.

Following up on my last post on the Summer of Protocols sessions, I want to pass on (again, to my future self, and possibly a few techie MindBlog readers) a few links to the world of decentralized grassroots everything - commerce, communications, finance, etc. - trying to bypass the traditional powers and gatekeepers in these areas by constructing distributed systems usually based on blockchains and cryptocurrencies. I am trying to learn more about this, taking things in small steps to avoid overload headaches... (One keeps stumbling on areas of worldwide engagement of thousands of very intelligent minds.)

Here is a worthwhile read of the general idea from the Ethereum Foundation.

I've described getting into one decentralized context by setting up a Helium Mobile network hotspot, as well as my own private Helium Mobile Cellular account. To follow this up, I pass on a link in an email from Helium pointing to its participation in Consensus24, May 29-31, in Austin, TX (where I now live), sponsored by CoinDesk. A look at the agenda for that meeting gives you an impression of the multiple engagements of government regulatory agencies, business, and the crypto-world that are occurring.

Monday, April 08, 2024

New protocols for uncertain times.

I want to point to a project launched by Venkatesh Rao and others last year: “The Summer of Protocols.” Some background for this project can be found in his essay “In Search of Hardness.” Also, the essay “The Unreasonable Sufficiency of Protocols” by Rao et al. is an excellent presentation of what protocols are about. I strongly recommend that you read it if nothing else.

Here is a description of the project: 

Over 18 weeks in Summer 2023, 33 researchers from diverse fields including architecture, law, game design, technology, media, art, and workplace safety engaged in collaborative speculation, discovery, design, invention, and creative production to explore protocols, broadly construed, from various angles.

Their findings, catalogued here in six modules, comprise a variety of textual and non-textual artifacts (including art works, game designs, and software), organized around a set of research themes: built environments, danger and safety, dense hypermedia, technical standards, web content addressability, authorship, swarms, protocol death, and (artificial) memory.
I have read through Module One for 2023, and it is solid, interesting, deep-dive stuff. Module 2 is also available. Modules 3-6 are said to be 'coming soon' (as of 4/4/24, four months into a year in which the 2024 Summer of Protocols program is already underway, with a proposal deadline of 4/12/24).

Here is one clip from the “In Search of Hardness” essay:

…it’s only in the last 50 years or so, with the rise of communications technologies, especially the internet and container shipping, and the emergence of unprecedented planet-scale coordination problems like climate action, that protocols truly came into focus as first-class phenomena in our world; the sine qua non of modernity. The word itself is less than a couple of centuries old.

And it wasn’t until the invention of blockchains in 2009 that they truly came into their own as phenomena with their own unique technological and social characteristics, distinct from other things like machines, institutions, processes, or even algorithms.

Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.

But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.

Friday, April 05, 2024

Our seduction by AI’s believable human voice.

I want to point to an excellent New Yorker article by Patrick House titled “The Lifelike Illusion of A.I.” The article strikes home for me, for when a chatbot responds to one of my prompts using the pronoun “I,” I unconsciously attribute personhood to the machine, forgetting that this is a cheap trick used by programmers of large language models to increase the plausibility of responses.

House starts off his article by describing the attachments people formed with the Furby, an animatronic toy resembling a small owl, and Pleo, an animatronic toy dinosaur. Both use a simple set of rules to make the toys appear to be alive. Furby’s eyes move up and down in a way meant to imitate an infant’s eye movements while scanning a parent’s face. Pleo mimes different emotional behaviors when touched differently.
For readers who hit the New Yorker paywall when they click the above link, here are a few clips from the article that I think get across the main points:
“A Furby possessed a pre-programmed set of around two hundred words across English and “Furbish,” a made-up language. It started by speaking Furbish; as people interacted with it, the Furby switched between its language dictionaries, creating the impression that it was learning English. The toy was “one motor—a pile of plastic,” Caleb Chung, a Furby engineer, told me. “But we’re so species-centric. That’s our big blind spot. That’s why it’s so easy to hack humans.” People who used the Furby simply assumed that it must be learning.”
Chung considers Furby and Pleo to be early, limited examples of artificial intelligence—the “single cell” form of a more advanced technology. When I asked him about the newest developments in A.I.—especially the large language models that power systems like ChatGPT—he compared the intentional design of Furby’s eye movements to the chatbots’ use of the word “I.” Both tactics are cheap, simple ways to increase believability. In this view, when ChatGPT uses the word “I,” it’s just blinking its plastic eyes, trying to convince you that it’s a living thing.
We know that, in principle, inanimate ejecta from the big bang can be converted into thinking, living matter. Is that process really happening in miniature at server farms maintained by Google, Meta, and Microsoft? One major obstacle to settling debates about the ontology of our computers is that we are biased to perceive traces of mind and intention even where there are none. In a famous 1944 study, two psychologists, Marianne Simmel and Fritz Heider, had participants watch a simple animation of two triangles and a circle moving around one another. They then asked some viewers what kind of “person” each of the shapes was. People described the shapes using words like “aggressive,” “quarrelsome,” “valiant,” “defiant,” “timid,” and “meek,” even though they knew that they’d been watching lifeless lines on a screen.
…chatbots are designed by teams of programmers, executives, and engineers working under corporate and social pressures to make a convincing product. “All these writers and physicists they’re hiring—that’s game design,” he said. “They’re basically making levels.” (In August of last year, OpenAI acquired an open-world-video-game studio, for an undisclosed amount.) Like a game, a chatbot requires user input to get going, and relies on continued interaction. Its guardrails can even be broken using certain prompts that act like cheat codes, letting players roam otherwise inaccessible areas. Blackley likened all the human tinkering involved in chatbot training to the set design required for “The Truman Show,” the TV program within the eponymous film. Without knowing it, Truman has lived his whole life surrounded not by real people but by actors playing roles—wife, friend, milkman. There’s a fantasy that “we’ve taken our great grand theories of intelligence and baked them into this model, and then we turned it on and suddenly it was exactly like this,” Blackley went on. “It’s much more like Truman’s show, in that they tweak it until it seems really cool.”
A modern chatbot isn’t a Furby. It’s not a motor and a pile of plastic. It’s an analytic behemoth trained on data containing an extraordinary quantity of human ingenuity. It’s one of the most complicated, surprising, and transformative advances in the history of computation. A Furby is knowable: its vocabulary is limited, its circuits fixed. A large language model generates ideas, words, and contexts never before known. It is also—when it takes on the form of a chatbot—a digital metamorph, a character-based shape-shifter, fluid in identity, persona, and design. To perceive its output as anything like life, or like human thinking, is to succumb to its role play.

Wednesday, April 03, 2024

Neurons help flush waste out of our brains during sleep

More information (summarized here) on what is happening in our brains while we sleep is provided by Jiang-Xie et al., who show that active neurons can stimulate the clearance of their own metabolic waste by driving changes to ion gradients in the surrounding fluid and by promoting the pulsation of nearby blood vessels. Here is the Jiang-Xie et al. abstract:

The accumulation of metabolic waste is a leading cause of numerous neurological disorders, yet we still have only limited knowledge of how the brain performs self-cleansing. Here we demonstrate that neural networks synchronize individual action potentials to create large-amplitude, rhythmic and self-perpetuating ionic waves in the interstitial fluid of the brain. These waves are a plausible mechanism to explain the correlated potentiation of the glymphatic flow through the brain parenchyma. Chemogenetic flattening of these high-energy ionic waves largely impeded cerebrospinal fluid infiltration into and clearance of molecules from the brain parenchyma. Notably, synthesized waves generated through transcranial optogenetic stimulation substantially potentiated cerebrospinal fluid-to-interstitial fluid perfusion. Our study demonstrates that neurons serve as master organizers for brain clearance. This fundamental principle introduces a new theoretical framework for the functioning of macroscopic brain waves.

Monday, April 01, 2024

When memories get complex, sleep comes to their rescue

Here I point to a PNAS article by Lutz et al. and a commentary on the work by Schechtman. Here is the Lutz et al. abstract:


Real-life events usually consist of multiple elements such as a location, people, and objects that become associated during the event. Such associations can differ in their strength, and some elements may be associated only indirectly (e.g., via a third element). Here, we show that sleep compared with nocturnal wakefulness selectively strengthens associations between elements of events that were only weakly encoded and of such that were not encoded together, thus fostering new associations. Importantly, these sleep effects were associated with an improved recall of the complete event after presentation of only a single cue. These findings uncover a fundamental role of sleep in the completion of partial information and are critical for understanding how real-life events are processed during sleep.


Sleep supports the consolidation of episodic memory. It is, however, a matter of ongoing debate how this effect is established, because, so far, it has been demonstrated almost exclusively for simple associations, which lack the complex associative structure of real-life events, typically comprising multiple elements with different association strengths. Because of this associative structure interlinking the individual elements, a partial cue (e.g., a single element) can recover an entire multielement event. This process, referred to as pattern completion, is a fundamental property of episodic memory. Yet, it is currently unknown how sleep affects the associative structure within multielement events and subsequent processes of pattern completion. Here, we investigated the effects of post-encoding sleep, compared with a period of nocturnal wakefulness (followed by a recovery night), on multielement associative structures in healthy humans using a verbal associative learning task including strongly, weakly, and not directly encoded associations. We demonstrate that sleep selectively benefits memory for weakly associated elements as well as for associations that were not directly encoded but not for strongly associated elements within a multielement event structure. Crucially, these effects were accompanied by a beneficial effect of sleep on the ability to recall multiple elements of an event based on a single common cue. In addition, retrieval performance was predicted by sleep spindle activity during post-encoding sleep. Together, these results indicate that sleep plays a fundamental role in shaping associative structures, thereby supporting pattern completion in complex multielement events.

Friday, March 29, 2024

How communication technology has enabled the corruption of our communication and culture.

I pass on two striking examples from today’s New York Times, with a few clips of text from each:

A.I.-Generated Garbage Is Polluting Our Culture:

(You really should read the whole article...I've given up on trying to assemble clips of text that get across the whole message, and pass on these bits towards the end of the article:)

....we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap A.I. content to maximize clicks and views, which in turn pollutes our culture and even weakens our grasp on reality. And so far, major A.I. companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture.
Is Threads the Good Place?:

Once upon a time on social media, the nicest app of them all, Instagram, home to animal bloopers and filtered selfies, established a land called Threads, a hospitable alternative to the cursed X... Threads would provide a new refuge. It would be Twitter But Nice, a Good Place where X’s liberal exiles could gather around for a free exchange of ideas and maybe even a bit of that 2012 Twitter magic — the goofy memes, the insider riffing, the meeting of new online friends.

...And now, after a mere 10 months, we can see exactly what we built: a full-on bizarro-world X, handcrafted for the left end of the political spectrum, complete with what one user astutely labeled “a cult type vibe.” If progressives and liberals were provoked by Trumpers and Breitbart types on Twitter, on Threads they have the opportunity to be wounded by their own kind... Threads’ algorithm seems precision-tweaked to confront the user with posts devoted to whichever progressive position is slightly lefter-than-thou... There’s some kind of algorithm that’s dusting up the same kind of outrage that Twitter had. Threads feels like it’s splintering the left.

The fragmentation of social media may have been as inevitable as the fragmentation of broadcast media. Perhaps also inevitable, any social media app aiming to succeed financially must capitalize on the worst aspects of social behavior. And it may be that Hobbes, history’s cheery optimist, was right: “The condition of man is a condition of war of every one against every one.” Threads, it turns out, is just another battlefield.


Wednesday, March 27, 2024

Brain changes over our lifetime.

This video from The Economist is one of the best I have seen for a popular audience. Hopefully the basic facts presented are slowly seeping throughout our culture.

Monday, March 25, 2024

If you want to remember a landscape, be sure to include a human....

Fascinating observations by Jimenez et al.  on our inherent human drive to understand our vastly social world...(and in the same issue of PNAS note this study on the importance of the social presence of either human or virtual instructors in multimedia instructional videos.)


Writer Kurt Vonnegut once said “if you describe a landscape or a seascape, or a cityscape, always be sure to include a human figure somewhere in the scene. Why? Because readers are human beings, mostly interested in other human beings.” Consistent with Vonnegut’s intuition, we found that the human brain prioritizes learning scenes including people, more so than scenes without people. Specifically, as soon as participants rested after viewing scenes with and without people, the dorsomedial prefrontal cortex of the brain’s default network immediately repeated the scenes with people during rest to promote social memory. The results add insight into the human bias to process the social landscape.


Sociality is a defining feature of the human experience: We rely on others to ensure survival and cooperate in complex social networks to thrive. Are there brain mechanisms that help ensure we quickly learn about our social world to optimally navigate it? We tested whether portions of the brain’s default network engage “by default” to quickly prioritize social learning during the memory consolidation process. To test this possibility, participants underwent functional MRI (fMRI) while viewing scenes from the documentary film, Samsara. This film shows footage of real people and places from around the world. We normed the footage to select scenes that differed along the dimension of sociality, while matched on valence, arousal, interestingness, and familiarity. During fMRI, participants watched the “social” and “nonsocial” scenes, completed a rest scan, and a surprise recognition memory test. Participants showed superior social (vs. nonsocial) memory performance, and the social memory advantage was associated with neural pattern reinstatement during rest in the dorsomedial prefrontal cortex (DMPFC), a key node of the default network. Moreover, it was during early rest that DMPFC social pattern reinstatement was greatest and predicted subsequent social memory performance most strongly, consistent with the “prioritization” account. Results simultaneously update 1) theories of memory consolidation, which have not addressed how social information may be prioritized in the learning process, and 2) understanding of default network function, which remains to be fully characterized. More broadly, the results underscore the inherent human drive to understand our vastly social world.




Wednesday, March 20, 2024

Fundamentally changing the nature of war.

I generally try to keep a distance from 'the real world' and apocalyptic visions of what AI might do, but I decided to pass on some clips from this technology essay in The Wall Street Journal that makes some very plausible predictions about the future of armed conflicts between political entities:

The future of warfare won’t be decided by weapons systems but by systems of weapons, and those systems will cost less. Many of them already exist, whether they’re the Shahed drones attacking shipping in the Gulf of Aden or the Switchblade drones destroying Russian tanks in the Donbas or smart seaborne mines around Taiwan. What doesn’t yet exist are the AI-directed systems that will allow a nation to take unmanned warfare to scale. But they’re coming.

At its core, AI is a technology based on pattern recognition. In military theory, the interplay between pattern recognition and decision-making is known as the OODA loop— observe, orient, decide, act. The OODA loop theory, developed in the 1950s by Air Force fighter pilot John Boyd, contends that the side in a conflict that can move through its OODA loop fastest will possess a decisive battlefield advantage.

For example, of the more than 150 drone attacks on U.S. forces since the Oct. 7 attacks, in all but one case the OODA loop used by our forces was sufficient to subvert the attack. Our warships and bases were able to observe the incoming drones, orient against the threat, decide to launch countermeasures and then act. Deployed in AI-directed swarms, however, the same drones could overwhelm any human-directed OODA loop. It’s impossible to launch thousands of autonomous drones piloted by individuals, but the computational capacity of AI makes such swarms a possibility.

This will transform warfare. The race won’t be for the best platforms but for the best AI directing those platforms. It’s a war of OODA loops, swarm versus swarm. The winning side will be the one that’s developed the AI-based decision-making that can outpace their adversary. Warfare is headed toward a brain-on-brain conflict.

The Department of Defense is already researching a “brain-computer interface,” which is a direct communications pathway between the brain and an AI. A recent study by the RAND Corporation examining how such an interface could “support human-machine decision-making” raised the myriad ethical concerns that exist when humans become the weakest link in the wartime decision-making chain. To avoid a nightmare future with battlefields populated by fully autonomous killer robots, the U.S. has insisted that a human decision maker must always remain in the loop before any AI-based system might conduct a lethal strike.

But will our adversaries show similar restraint? Or would they be willing to remove the human to gain an edge on the battlefield? The first battles in this new age of warfare are only now being fought. It’s easy to imagine a future, however, where navies will cease to operate as fleets and will become schools of unmanned surface and submersible vessels, where air forces will stand down their squadrons and stand up their swarms, and where a conquering army will appear less like Alexander’s soldiers and more like a robotic infestation.

Much like the nuclear arms race of the last century, the AI arms race will define this current one. Whoever wins will possess a profound military advantage. Make no mistake, if placed in authoritarian hands, AI dominance will become a tool of conquest, just as Alexander expanded his empire with the new weapons and tactics of his age. The ancient historian Plutarch reminds us how that campaign ended: “When Alexander saw the breadth of his domain, he wept, for there were no more worlds to conquer.”

Elliot Ackerman and James Stavridis are the authors of “2054,” a novel that speculates about the role of AI in future conflicts, just published by Penguin Press. Ackerman, a Marine veteran, is the author of numerous books and a senior fellow at Yale’s Jackson School of Global Affairs. Admiral Stavridis, U.S. Navy (ret.), was the 16th Supreme Allied Commander of NATO and is a partner at the Carlyle Group.


Monday, March 18, 2024

The Physics of Non-duality

I want to pass on this lucid posting by "Sean L" of Boston to the Waking Up community site:

The physics of nonduality

In this context I mean “nonduality” as it refers to there being no real subject-object separation or duality. One of the implications of physics that originally led me to investigate notions of “awakening” and “no-self” is the idea that there aren’t really any separate objects. We can form very self-consistent and useful concepts of objects (a car, an atom, a self, a city), but from the perspective of the universe itself such objects don’t actually exist as well-defined independent “things.” All that’s real is the universe as one giant, self-interacting, dynamic, but ultimately singular “thing.” If you try to partition off one part of the universe (a self, a galaxy) from the rest, you’ll find that you can’t actually do so in a physically meaningful way (and certainly not one that persists over time). All parts of the universe constantly interact with their local environment, exchanging mass and energy. Objectively, physics says that all points in spacetime are characterized by values of different types of fields and that’s all there is to it. Analogy: you might see this word -->self<-- on your computer screen and think of it as an object, but really it’s just a pattern of independent adjacent pixel values that you’re mapping to a concept. There is no objectively physically real “thing” that is the word self (just a well defined and useful concept). 

This is akin to the idea that few if any of the cells making up your body now are the same as when you were younger. Or the idea that the exact pattern you consider to be “you” in this moment will not be numerically identical to what you consider “you” in the next picosecond. Or the idea that there is nothing that I could draw a closed physical boundary around that perfectly encloses the essence of “you” such that excluding anything inside that boundary means it would no longer contain “you” and including anything currently outside the boundary would mean it contains more than just “you.” This is true even if you try to draw the boundary around only your brain or just specific parts of your brain. I think this is a fun philosophical idea, but also one that often gets the response of “ok, yeah, sure I guess that’s all logically consistent,” but then still feels esoteric. It often feels like it’s just semantics, or splitting hairs, or somehow not something that really threatens the idea of identity or of a physical self. 

I was recently discussing with another WakingUp user that what made this notion far more visceral and convincing to me (enough to motivate me to go out in search of what “I” actually am, which has ultimately led me here) was realizing that even the very idea of trying to draw a boundary around a “thing” is objectively meaningless. So, I thought I’d share what I mean by that in case others find it interesting too :) !

Here are four pictures. The two on the left are pictures of very simple boundaries of varying thickness that one might try to draw around a singular “thing” (perhaps a self?) to demonstrate that it is indeed a well defined object. The two pictures on the right are of the exact same “boundaries” as on the left, but as they would be seen by a creature that evolved to perceive reality in the momentum basis. I’ll better explain what that means in a moment, but the key point is that the pictures on the left and right are (as far as physics or objective reality is concerned) exactly equivalent representations of the same part of reality. Both sets of pictures are perceptual “pointers” to the same part of the universe. You literally cannot say that one is a more veridical or more accurate depiction of reality than the other, because they are equivalent mathematical descriptions of the same underlying objective structure. Humans just happen to have a bias toward the left images.

So then… what could these “boundaries” be enclosing in the pictures on the right? I sure can’t tell. Nor do I think it even makes sense to ask the question! Our sense that there are discrete “objects” in the universe (including selves) seems intuitive when perceiving the universe as shown on the left (as we do). But when perceiving the exact same reality as shown on the right I find this belief very quickly breaks down. There simply is no singular, bounded, contained “thing” on the right. Anything that might at first appear on the left to be a separable object will be completely mixed up with and inseparable from its “surroundings” when viewed on the right, and vice-versa. The boundary itself clearly isn’t even a boundary. Boundaries are (very useful!) concepts, but they have no ultimate objective physical meaning.


Some technical details for those interested:

You can think of a basis like a fancy coordinate system. Analogy: I can define the positions of all the billiard balls on a pool table by defining an XY coordinate system on the table and listing the numerical coordinates of each ball. But if I stretch and/or rotate that coordinate system then all the numbers representing those same positions of the balls will change. The balls themselves haven’t changed positions, but my coordinate system-dependent “perception” of the balls is totally different. They're different ways of perceiving the same fundamental structure (billiard ball positions), even though that structure itself exists independently of any coordinate system. The left and right images are analogous to different coordinate systems, but in a more fundamental way.
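To make the billiard-ball analogy concrete, here is a minimal Python sketch (the ball positions and rotation angle are arbitrary, invented for the example): every coordinate value changes when the coordinate system is rotated, but the physical structure — the inter-ball distances — is untouched.

```python
import math

# Positions of three billiard balls in the table's original XY coordinates.
balls = [(1.0, 2.0), (3.0, 0.5), (2.0, 2.5)]

def rotate(p, theta):
    """Express the same physical point in a coordinate system rotated by theta."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * y, -s * x + c * y)

theta = math.pi / 5          # an arbitrary rotation of the coordinate grid
rotated = [rotate(p, theta) for p in balls]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Every coordinate value changes...
print(balls[0], "->", rotated[0])
# ...but the physical structure (inter-ball distances) is identical.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dist(balls[i], balls[j]) - dist(rotated[i], rotated[j])) < 1e-12
```

The numbers describing the balls are coordinate-system-dependent; the geometry they point at is not.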


In quantum mechanics the correct description of reality is the wave function. For an entangled system of particles you don’t have a separate wave function for each particle. Instead, you have one multi-particle wave function for the whole system (strictly speaking one could argue the entire universe is most correctly described as a single giant and stupidly complicated wave function). There is no way to break up this wave function into separate single-particle wave functions (that’s what it means for the particles to be entangled). What that means is from the perspective of true reality, there literally aren’t separate distinct particles. That’s not just an interpretation – it’s a testable (falsifiable) statement of physical reality and one that has been extensively verified experimentally. So, if you think of yourself as a distinct object, or at least as a dynamical evolving system of interacting (but themselves distinct) particles, sorry, but that’s simply not how we understand reality to actually work :P .
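The claim that an entangled wave function cannot be broken into separate single-particle pieces can be checked directly in a toy two-qubit example. The sketch below is a standard textbook computation (not anything specific to this post): the number of nonzero singular values of the state's coefficient matrix — the Schmidt rank — distinguishes entangled from separable states.

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2), written as a 2x2 coefficient
# matrix C[i, j], where the full state is sum_ij C[i, j] |i>|j>.
bell = np.array([[1.0, 0.0],
                 [0.0, 1.0]]) / np.sqrt(2)

# A state is a product of two single-particle states exactly when this
# matrix has rank 1 (a single nonzero Schmidt coefficient).
schmidt_coeffs = np.linalg.svd(bell, compute_uv=False)
print(schmidt_coeffs)  # two equal nonzero values -> entangled, not separable

product = np.outer([1.0, 0.0], [0.0, 1.0])  # |0>|1>, a separable state
print(np.linalg.svd(product, compute_uv=False))  # one nonzero value -> separable
```

For the Bell state there is provably no pair of single-particle states whose product reproduces it — which is the precise sense in which "there aren't separate distinct particles."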

However, to do anything useful we have to write down the wave function (e.g. so we can do calculations with it). We have to represent it mathematically. This requires choosing a basis in which to write it down, much like choosing a coordinate system with which to be able to write down the numerical positions of the billiard balls. A human-intuitive basis is the position basis, which is what’s shown in the left images. However, a completely equivalent way to write down the same wave function is in the momentum basis, which is what’s shown in the right images. There also exist many (really, infinite) other possible bases. Some bases will be more convenient than others depending on the type of calculation you’re trying to do. Ultimately, all bases are arbitrary and none are objectively real, because the universe doesn’t need to “write down” a wave function to compute it. The universe just is. To me, the equivalent representation of the same underlying reality in an infinite diversity of possible Hilbert Spaces (i.e. using different bases) much more viscerally drives home the point that there really are no separate objects (including selves). That’s not just philosophy! There’s just one objective reality (one thing, no duality) that can be perceived in an infinite variety of ways, each with different pros and cons. And our way of perceiving reality lends itself to concepts of separate things and objects.
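A small numerical sketch (arbitrary grid and units, invented for illustration) makes the position/momentum point visceral: a state that is perfectly "bounded" in the position basis is smeared across the whole momentum basis, even though both are exactly the same state.

```python
import numpy as np

N = 1024
x = np.linspace(-50, 50, N)
dx = x[1] - x[0]

# A sharply "bounded" object in the position basis: a narrow box.
psi = np.where(np.abs(x) < 1.0, 1.0, 0.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

# The same state written in the momentum basis (discrete Fourier transform).
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
dk = k[1] - k[0]

# Same state, same total "amount of stuff" in either basis (Parseval).
print(np.sum(np.abs(psi) ** 2) * dx)   # ~1.0
print(np.sum(np.abs(phi) ** 2) * dk)   # ~1.0

# But the "boundary" only exists in one description:
inside_x = np.sum(np.abs(psi[np.abs(x) <= 1.0]) ** 2) * dx  # all of it
inside_k = np.sum(np.abs(phi[np.abs(k) <= 1.0]) ** 2) * dk  # only ~half
print(inside_x, inside_k)
```

In the position basis 100% of the state sits inside the box; in the momentum basis the same state spills across the entire axis. Neither description is "more real" than the other.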


There are other parts of physics I didn’t get into here that I think demonstrate that the true nature of the universe must be nondual (maybe to be discussed later). For example, the lack of room for free will or the indistinguishability of particles. If you actually read this whole post, thanks for your time and attention, and I hope you found it as interesting as I do!

Thursday, March 14, 2024

An inexpensive Helium Mobile 5G cellphone plan that pays you to use it?

This is a followup to the previous post describing my setting up of a 5G hotspot on Helium’s decentralized 5G infrastructure that earns MOBILE tokens. The cash value of the MOBILE tokens earned since July 2022 is ~7X the cost of the equipment needed to generate them.

Now I want to document some further facts for my future self and MindBlog’s techie readers.

Recently Helium has introduced Helium Mobile, a cell phone plan using this new 5G infrastructure that costs $20/month - much less expensive than other cellular providers like Verizon and AT&T. It has partnered with T-Mobile to fill in coverage areas its own 5G network hasn’t reached.

Nine days ago I downloaded the Helium Mobile app onto my iPhone 12 and set up an account with an eSIM and a new phone number, alongside my longtime number, which remains in a Verizon account using a physical SIM card.

My iPhone has been earning MOBILE tokens by sharing its location to allow better mapping of the Helium 5G network. As I am writing this, the app has earned 3,346 MOBILE tokens that could be sold and converted to $14.32 at this moment (the price of MOBILE, like other cryptocurrencies, is very volatile).

If this earning rate continues (a big ‘if’), the cellular account I am paying $20/month for will be generating MOBILE tokens worth ~$45 each month. The $20 monthly cell phone plan charge can be paid with MOBILE tokens, leaving ~$25/month of passive income from my subscribing to Helium Mobile and allowing anonymous tracking of my phone as I move about. (Apple sends a message every three days asking if I am sure I want to allow continuous tracking by this one app.)
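For the record, here is the arithmetic behind that projection, using the figures quoted above (a rough snapshot only, since MOBILE's price swings widely):

```python
# Projection from the Helium Mobile numbers quoted above (the post's own
# estimates; MOBILE's dollar value is highly volatile).
tokens_earned = 3346      # MOBILE earned in the first 9 days
usd_value = 14.32         # their cash value at the time of writing
days = 9
plan_cost = 20.00         # monthly Helium Mobile plan price, USD

monthly_usd = usd_value / days * 30
net_passive = monthly_usd - plan_cost
print(f"projected monthly earnings: ${monthly_usd:.2f}")
print(f"net after plan cost:        ${net_passive:.2f}")
```

The straight-line projection comes out slightly above the ~$45/month figure quoted in the post, which rounds conservatively.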

So there you have it.  Any cautionary notes from techie readers about the cybersecurity implications of what I am doing would be welcome.  

Wednesday, March 13, 2024

MindBlog becomes a 5G cellular hotspot in the low-priced ‘People’s Cell Phone Network’ - Helium Mobile

I am writing this post, as is frequently the case, for myself to be able to look up in the future, as well as for MindBlog techie readers who might stumble across it. It describes my setup of a 5G hotspot in the new Helium Mobile 5G network. The post following this one describes my becoming a user of this new cell phone network by putting the Helium Mobile app on my iPhone using an eSIM.

This becomes my third post describing my involvement in the part of the crypto movement seeking to 'return power to the people.' It attempts to bypass the large corporations that are the current gatekeepers and regulators of commerce and communications, and that are able to assert controls serving their own interests and profits more than the public good.

The two previous posts (here and here) describe my being seduced into crypto-world by my son's six hundred-fold return on investment, earned by being in the first cohort (during the "genesis" period) to put little black boxes with antennas on their window sills, earning HNT (Helium blockchain tokens) with LoRa 868 MHz radios transmitting and receiving in the 'Internet of Things.' I was a latecomer, and in the 22 months since June of 2022 have earned ~$200 on an investment of ~$500 of equipment.

Helium next came up with the idea of setting up its own 5G cell phone network, called Helium Mobile. Individual Helium 5G hotspots (small cell phone antennas) use Citizens Broadband Radio Service (CBRS) radios to provide cellular coverage like that provided by telecom companies' more expensive networks of towers. (CBRS is a wide broadcast 3.5 GHz band in the United States that does not require a spectrum license for use.)

In July of 2022, I decided to set up the Helium 5G hotspot equipment shown in the picture below, in time for the genesis period of this new Helium 5G cellular network. I made my Abyssinian cat Martin, shown in front of the Bobber 500 miner, the system administrator. The 5G antenna seen on the sill in the middle of the window views ~170 degrees of the southern sky.

This system cost ~$2,500 and by early March 2024 had earned ~4.3 million MOBILE tokens worth ~$18,000. As in a Ponzi scheme, most of the rewards came from the genesis period; March 2024 earnings are ~$45/week. If this rate of earnings persists, it represents an annual ROI (return on investment) of ~100%.
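The ROI figure is easy to check from the post's own rough numbers:

```python
# Back-of-envelope ROI check (the post's own estimates, not audited figures).
equipment_cost = 2500      # USD: Bobber 500 miner, CBRS radio, antenna
weekly_earnings = 45       # USD/week as of March 2024

annual_earnings = weekly_earnings * 52
roi = annual_earnings / equipment_cost
print(f"annual earnings: ${annual_earnings}")
print(f"annual ROI:      {roi:.0%}")
```

This comes to roughly 94% per year - close enough to call ~100%, with the large caveat that the earning rate itself is unlikely to hold.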

The writing above is mine, and I decided just for the heck of it to recruit the assistance of several GPT engines (Google Gemini, OpenAI ChatGPT-4, Anthropic Claude, and Microsoft Bing) with the following prompt:

Could you please act as Deric Bownds writing a post for his Blog, Deric’s MindBlog. Describe my setting up of a Helium 5G hotspot using the Bobber 500 miner with FreedomFi CBRS and an internal antenna during the genesis period for the establishment of the new Helium 5G cellular network.

The responses of Gemini and Bing were excited, poetic, lyrical, overwrought hyperbole, very much in the spirit of the cyber-world pitches one sees online. ChatGPT-4 (the best response) and Claude did toss up some interesting paragraphs, and below I pass on some of ChatGPT-4's slightly edited and redundant text. I would be more sanguine about all of the hype over 'democratic participation' (or at least my participation) if I had not been completely flummoxed by the cacophony of the Helium Discord server, YouTube videos, and the complexity of the HIP (Helium Improvement Proposal) votes that ratify new changes and improvements to the system.

The Helium Network, known for pioneering decentralized wireless infrastructure, has embarked on an ambitious journey to extend its reach into 5G, leveraging the power of blockchain technology to create a community-powered network. The initiative is ambitious, aiming to democratize cellular connectivity, making it more accessible and affordable. This endeavor is not just about technology; it's about reimagining the fabric of our digital society.
This genesis period of the Helium G5 network represented a fascinating intersection of technology, psychology, and community. The network's decentralized nature challenges traditional models of infrastructure ownership and control, proposing a more democratic and participatory framework. It prompts us to reflect on our relationship with technology: Are we mere consumers, or can we also be contributors and custodians of the digital landscapes we inhabit?
As more individuals and communities join this initiative, setting up their hotspots and expanding the network, a unique form of collective intelligence emerges. This network, powered by the people, for the people, embodies a new kind of digital ecosystem—one that values collaboration over competition, and shared success over isolated gains.
The implications for our cognitive landscapes are profound. By democratizing access to 5G connectivity, we are not just enhancing our ability to communicate; we are potentially altering the way we think, learn, and engage with the world around us. The Helium G5 network invites us to imagine a future where connectivity is as natural and essential as the air we breathe, a future where our digital and physical realities are seamlessly intertwined.
The technology we embrace today shapes the world we live in tomorrow. Through initiatives like the Helium G5 network, we have the opportunity to craft a digital environment that reflects our highest aspirations for society—one that is open, inclusive, and fundamentally human.
In this journey of connectivity, we are all contributors, shaping the digital ethos of our era. The genesis of the Helium G5 network is not just a technical milestone; it is a call to action, inviting us to rethink our role in the digital age and to contribute to a network that embodies our collective values and aspirations.

The next post describes my becoming a customer of the new Helium Mobile cellular service plan costing $20/month.


Monday, March 11, 2024

How AI’s GPT engines work - Lanier’s forest and trees metaphor.

Jaron Lanier does a piece in The New Yorker titled "How to Picture A.I." (if you hit the paywall by clicking the link, try opening an 'empty tab' in your browser, then copy and paste in the URL that got you the paywall). I tried to do my usual sampling of small chunks of text to give the message, but found that very difficult, and so I pass on several early paragraphs and urge you to read the whole article. Lanier's metaphors give me a better sense of what is going on in a GPT engine, but I'm still largely mystified. Anyway, here's some text:
In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input.
Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.
Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.
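Lanier's tower-of-grids picture can be sketched in a few lines of NumPy (a toy illustration only - the grid sizes and measurements here are invented for the example): the bottom grid measures edge prominence in the image, and each higher grid measures the grid beneath it, up to a single canopy value.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_prominence(img):
    """Bottom grid: how sharply values change at each point of the image."""
    gy = np.abs(np.diff(img, axis=0))[:, :-1]
    gx = np.abs(np.diff(img, axis=1))[:-1, :]
    return gx + gy

def measure_layer(layer):
    """Each higher grid measures the grid beneath it: 2x2 block averages."""
    h, w = layer.shape
    h2, w2 = h - h % 2, w - w % 2
    blocks = layer[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2)
    return blocks.mean(axis=(1, 3))

img = rng.random((17, 17))       # a stand-in "picture"
tower = [edge_prominence(img)]   # the trunk's bottom layer sits on the image
while min(tower[-1].shape) > 1:  # keep stacking grids up to the canopy
    tower.append(measure_layer(tower[-1]))

for i, layer in enumerate(tower):
    print(f"layer {i}: grid {layer.shape}")
# The single number at the canopy depends, through every layer, on the image.
# "Training" would nudge the numbers at each level toward a useful output.
```

Real networks replace the fixed block averages with millions of adjustable numbers, which is exactly what the "magic laser" tunes.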

Friday, March 08, 2024

Explaining the evolution of gossip

A fascinating open-access article from Pan et al.:

From Mesopotamian cities to industrialized nations, gossip has been at the center of bonding human groups. Yet the evolution of gossip remains a puzzle. The current article argues that gossip evolves because its dissemination of individuals’ reputations induces individuals to cooperate with those who gossip. As a result, gossipers proliferate as well as sustain the reputation system and cooperation.
Gossip, the exchange of personal information about absent third parties, is ubiquitous in human societies. However, the evolution of gossip remains a puzzle. The current article proposes an evolutionary cycle of gossip and uses an agent-based evolutionary game-theoretic model to assess it. We argue that the evolution of gossip is the joint consequence of its reputation dissemination and selfishness deterrence functions. Specifically, the dissemination of information about individuals’ reputations leads more individuals to condition their behavior on others’ reputations. This induces individuals to behave more cooperatively toward gossipers in order to improve their reputations. As a result, gossiping has an evolutionary advantage that leads to its proliferation. The evolution of gossip further facilitates these two functions of gossip and sustains the evolutionary cycle.
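A minimal toy simulation (my own sketch, far simpler than the authors' actual agent-based model, with invented parameters) illustrates the claimed mechanism: when defection against a gossiper becomes common knowledge, agents cooperate preferentially with gossipers, giving gossiping an evolutionary advantage.

```python
import random

random.seed(1)

N, ROUNDS = 100, 20000
gossiper = [i < 30 for i in range(N)]     # 30% of agents spread reputations
good_rep = [True] * N                     # everyone starts in good standing
received = {True: [0, 0], False: [0, 0]}  # partner type -> [coop count, meetings]

for _ in range(ROUNDS):
    actor, partner = random.sample(range(N), 2)
    if not good_rep[partner]:
        cooperate = False                 # discriminators snub the disreputable
    else:
        # Defecting against a gossiper will be broadcast, so it is rarer.
        p_defect = 0.1 if gossiper[partner] else 0.5
        cooperate = random.random() >= p_defect
    if gossiper[partner] and good_rep[partner] and not cooperate:
        good_rep[actor] = False           # the snub becomes common knowledge
    elif gossiper[partner] and cooperate:
        good_rep[actor] = True            # good deeds travel through gossip too
    stats = received[gossiper[partner]]
    stats[0] += cooperate
    stats[1] += 1

rate = {k: c / n for k, (c, n) in received.items()}
print(f"cooperation received by gossipers:     {rate[True]:.2f}")
print(f"cooperation received by non-gossipers: {rate[False]:.2f}")
```

Gossipers end up receiving markedly more cooperation than non-gossipers - the "selfishness deterrence" channel the abstract describes, stripped down to a few lines.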