
Wednesday, April 12, 2023

The Physics of Intelligence - and LDMs (Large Danger Models)

I want to pass on my abstracting of an interesting article by Venkatesh Rao, another instance of my using MindBlog as my personal filing system to be sure I can come back to - and refresh my recall of - ideas I think are important. I also pass on ChatGPT 3.5's and ChatGPT 4's summaries of my summary!

The Physics of Intelligence - The missing discourse of AI

There are strong philosophy and engineering discourses, but no physics discourse. This is a problem because when engineers mainline philosophy questions in engineering frames without the moderating influence of physics frames, you get crackpottery…I did not say the physics of artificial intelligence…The physics of intelligence is no more about silicon semiconductors or neurotransmitters than the physics of flight is about feathers or aluminum.

Attention is the focus of one of the six basic questions about the physics of intelligence that I’ve been thinking about. Here is my full list:
What is attention, and how does it work?
What role does memory play in intelligence?
How is intelligence related to information?
How is intelligence related to spacetime?
How is intelligence related to matter?
How is intelligence related to energy and thermodynamics?
 

The first three are obviously at the “physics of intelligence” level of abstraction, just as “wing” is at the “physics of flight” level of abstraction. The last three get more abstract, and require some constraining, but there are already some good ideas floating around on how to do the constraining…We are not talking about the physics of computation in general…computation and intelligence are not synonymous or co-extensive…To talk about intelligence, it is necessary, but not sufficient, to talk about computation. You also have to talk about the main aspects of embodiment: spatial and temporal extent, materiality, bodily integrity maintenance in relation to environmental forces, and thermodynamic boundary conditions.
 

What is attention, and how does it work?

A computer is “paying attention” to the data and instructions in the CPU registers in any given clock cycle…but fundamentally, attention is not a design variable used in complex ways in basic computing. You could say AI begins when you start deploying computational attention in a more dynamic way.

Attention is to intelligence as wing is to flight. The natural and artificial variants have the same sort of loose similarity. Enough that using the same word to talk about both is justified…In AI, attention refers primarily to a scheme of position encoding of a data stream. Transformer models like GPT keep track of the position of each token in the input and output streams, and extract meaning out of it. Where a word is in a stream matters almost as much as what the word is.
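
(An aside from me, not Rao: to make "position encoding" concrete, here is a minimal Python sketch of the sinusoidal scheme from the original transformer paper. GPT-style models actually learn their position embeddings rather than computing them this way, so treat this as illustration only.)

import numpy as np

def sinusoidal_position_encoding(seq_len, d_model):
    # Each token position gets a unique d_model-dimensional vector built
    # from sines and cosines at geometrically spaced frequencies.
    positions = np.arange(seq_len)[:, np.newaxis]    # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]   # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles)   # even dimensions
    encoding[:, 1::2] = np.cos(angles)   # odd dimensions
    return encoding

# The encoding is added to each token's embedding, so "where a word is"
# becomes part of the same vector as "what the word is".
print(sinusoidal_position_encoding(seq_len=8, d_model=16).shape)   # (8, 16)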

You can interpret these mechanisms as attention in a human sense. What is in the context of a text? In text streams, physical proximity (tokens before and after), syntactic proximity (relationship among clauses and words in a formal grammatical sense) and semantic proximity (in the sense of some words, including very distant ones, being significant in the interpretation of others) all combine to create context. This is not that different from how humans process text. So at least to first order, attention in human and artificial systems is quite similar.
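
(Another aside of mine: the attention computation itself is compact enough to sketch. Below, a few random vectors stand in for learned token representations; the softmax weights are the "context" each token assembles from every other token, near or distant.)

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each token's query is scored against every token's key; the softmax
    # turns the scores into an attention budget over all positions.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))   # five tokens, 8-dimensional each
output, attn = scaled_dot_product_attention(tokens, tokens, tokens)
print(attn.round(2))   # each row sums to 1: how much each token attends to the others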

But as with wings, the differences matter. Human attention, arguably, is not primarily about information processing at all. It is about energy management. We manage attention to steady our energy into a sustainable and renewable flow. We engage in low-information activities like meditation, ritual, certain kinds of art, and prayer to learn to govern our attention in specific ways. This is not to make us better at information processing, but for other reasons, such as emotion regulation and motivation. Things like dopamine loops of satisfaction are involved. The use of well-trained attention for better information processing is only one of the effects.

Overall, human attention is more complex and multifaceted than AI attention, just as bird wings are fundamentally more complex mechanically. Attention in the sense of position-encoding for information processing is like the pure lift function of a wing. Human attention, in addition, serves functions analogous to a wing's control and propulsion functions.

What role does memory play in intelligence?

The idea of attention leads naturally to the idea of memory. Trivially, memory is a curated record of everything you’ve paid attention to in the past…An obvious way to understand current developments in AI is to think of LLMs and LIMs as idiosyncratically compressed atemporal textual and visual memories. Multimodal models can be interpreted as generalizations of this idea.

Human memory is modulated by evolving risk perceptions as it is laid down, and emotions triggered by existing memories regulate current attention, determining what new data gets integrated into the evolving model (as an aside, this is why human memory exists as a kind of evolving coherent narrative of self, rather than just as a pile of digested stuff).

Biological organism memory is not just an undifferentiated garbage record (a Large Garbage Model, or LGM) of what you paid attention to in the past; it shapes what you pay attention to in the future very directly and continuously. Biological memory is strongly opinionated memory. If a dog bites you…you can’t afford to separate training and inference in the sense of “training” on a thousand dog encounters…you have to use your encounter with Dog 1 to shape your attentional response to Dog 2. Human memories are like LGMs, except that the training process is regulated by a live emotional-regulation feedback loop that somehow registers and acts on evolving risk assessments. There’s a term for this in psychology (predictive coding or predictive processing) with a hand-wavy theory of energy-minimization attached, but I don’t find it fully satisfying.
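
(A toy illustration of mine, standing in for whatever the real feedback loop computes: if a danger signal scales the learning rate, a single bad encounter with Dog 1 immediately reshapes the response to Dog 2, with no separation of training and inference.)

def update(avoidance, encounter_was_bad, danger_signal, base_lr=0.05):
    # One online update of a scalar "avoid dogs" disposition. The danger
    # signal multiplies the learning rate, so frightening encounters are
    # written into memory far more strongly than neutral ones.
    lr = base_lr * (1.0 + 10.0 * danger_signal)
    target = 1.0 if encounter_was_bad else 0.0
    return avoidance + lr * (target - avoidance)

avoidance = update(0.0, encounter_was_bad=True, danger_signal=1.0)
print(round(avoidance, 2))   # 0.55: one frightening bite moves the model a lot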

I have a placeholder name for this scheme, but as yet it’s not very fleshed out. Biological memories are Large Danger Models (LDMs).

Why just danger? Why not other signals and drives towards feeding, sex, interestingness, poetry, and so on? I have a strong suspicion that danger is all you need to generate human-like memory, and in particular human-like experience of time. Human memory is the result of playing to continue the game, i.e., an infinite-game orientation. Once you have that in place, everything else emerges. Nothing else is as fundamental as basic survival.

AIs don’t yet count as human-equivalent to me: they’re in no danger, ever. Since we’re in the brute-force stage of training AI models, we train them on basically everything we have, with no danger signal accompanying any of it…AIs today develop their digested memories with no ongoing encoding or capture of the evolving risk and emotion assessments that modulate human memories. Even human grade schools, terrible as they are, do a better job than AI training protocols…the next big leap should be achievable by encoding some kind of environmental risk signal, i.e., we just need to put AIs in danger in the right way. My speculative idea of LDMs doesn’t seem that mysterious. LDMs are an engineering moonshot, not a physics mystery.

To lay it out more clearly, consider a thought experiment... Suppose you put a bunch of AIs in robot bodies, and let them evolve naturally, while scrounging resources for survival. To keep it simple, let’s say they only compete over scarce power outlets to charge their batteries. Their only hardcoded survival behavior is to plug in when running low… Let’s say the robots are randomly initialized to pay attention differently to different aspects of data coursing through them. Some robots pay extra attention to other robots’ actions. Other robots pay extra attention to the rocks in the environment. Obviously, the ones that happen to pay attention in the right ways will end up outcompeting the ones who don’t. The sneaky robots will evolve to be awake when other robots are powered down or hibernating for self-care, and hog the power outlets then. The bigger robots will learn they can always get the power outlets by shoving the smaller ones out of the way.

Now the question is: given all the multimodal data flowing through them, what will the robots choose to actually remember in their limited storage spaces, as their starter models get trained up? What sorts of LDMs will emerge? How will the risk modulation emerge? What equivalent of emotional regulation will emerge? What sense of time will emerge?
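
(One can caricature the setup in a few lines of Python - my deliberately crude sketch, nothing like the multimodal robots Rao imagines, but enough to show attention policies being shaped by competition for chargers.)

import random

def simulate(generations=50, pop=30):
    # Each agent is a single number: how much attention it pays to other
    # agents (versus rocks) when contesting a charging outlet.
    agents = [random.random() for _ in range(pop)]
    for _ in range(generations):
        # Socially attentive agents win contested outlets more often.
        scored = sorted(
            ((sum(random.random() < a for _ in range(10)), a) for a in agents),
            reverse=True,
        )
        survivors = [a for _, a in scored[: pop // 2]]
        # Survivors reproduce with small mutations; the rest are scrapped.
        agents = survivors + [
            min(1.0, max(0.0, a + random.gauss(0, 0.05))) for a in survivors
        ]
    return sum(agents) / len(agents)

print(f"mean social attention after selection: {simulate():.2f}")   # drifts toward 1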

The thought experiment of LDMs suggests a particular understanding of memory in relation to intelligence: memory is risk-modulated experiential data persistence that modulates ongoing experiential attention and risk-management choices....It’s a bit of a mouthful, but I think that’s fundamentally it.

I suspect the next generation of AI models will include some such embodiment feedback loop so memory goes from being brute-force generic persistence to persistence that’s linked to embodied behaviors in a feedback loop exposed to environmental dangers that act as survival pressures.

The resulting AIs won’t be eidetic idiot savants, and they’ll be less capable in some ways, but they will be able to survive in environments more dangerous than datacenters exposed to the world only through sanitized network connections. Instead of being Large Garbage Models (LGMs), they’ll be Large Danger Models (LDMs).
 

How is intelligence related to information?
 

We generally think about information as either primitive (you just have to know it) or entailed (you can infer it from what you already know)…Primitive information is a kind of dehydrated substance to which you can add compute (water) to expand it. Entailed information can be dehydrated into primitive form. Compression of various sorts exploits different ways of making the primitive/entailed distinction.
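
(The dehydration metaphor is easy to cash out in code - my example, not Rao's. The same information can be stored primitively, as the expanded data, or in entailed form, as a short rule you add compute to.)

# "Primitive" storage: the expanded data itself.
fib_expanded = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

# "Entailed" storage: a dehydrated rule; add compute (water) to expand it.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

assert list(fib(10)) == fib_expanded   # same information, two representations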

When you think of intelligence in relation to information though, you have to get more sophisticated…We think in terms of whether or not new data patterns require new categories, or simply modify the strengths of, and interrelationships among, existing ones…are you looking at something new, or is this just a variant or instance of something you already know?

Information for an intelligent system, then, is best understood in an ontological-novelty way rather than an information-theoretic way. Because it is not as fundamental an abstraction level, it is more arbitrary, which means how you factor your experience stream into categories is as much a function of the idiosyncrasies of your survival mode as it is a function of the bits-and-bytes level structure of what you’re experiencing…the models are making up weird compressions that are not human-understandable. That’s what information is to an intelligent system: efficient ontologies that reflect how that particular intelligence is situated in its environment.

Or to put it more plainly: information is the hallucinations an intelligence makes up to categorize reality compactly, in order to survive efficiently.

The hallucinations that LLMs make up — papers and citations that don’t exist, weird words that show up in images, strange new symbols, inscrutable compressions — are neither errors in truth-seeking, nor evidence of weird kinds of magic going on. They are just raw information (in the bits-and-bytes Shannon sense) that has been judged for salience to the basic challenge of survival, and represented accordingly, in evolving memory: ontologically compressed experiential data. It’s just extra weird because the models are evolving in a trivial, non-dangerous environment, so distinctions between robustly real categories and fragile hallucinations don’t matter. An academic can get fired for making up a citation. An LLM just gets wonder and mystique.

This is why information comes after memory in my physics of intelligence. Your memories are a function of a real-time danger management behavior. Your emergent categories of cognition did not evolve to be true in any rigorous sense. They evolved to help you survive. If someone who names and classifies certain data patterns as “ghosts” survives better than someone who names and classifies them “paranoid projections,” that’s how it goes. Truth is, in an evolutionary sense, the most robust representation of experience that helps you survive efficiently in the most challenging environments.

Models that describe reality in a broad and interesting way emerge when the danger environment tests you enough that you can’t bullshit to survive, but not so much that you must bullshit to survive. Both disconnected daydreams and psychotic breaks are poor foundations for existing in reality.

Spacetime, Matter, and Energy NOTE:  I am not including clips from this section in this post.
 

Here is ChatGPT 3.5's response to my request to summarize the above in 1000 words:

The article discusses the importance of including a physics discourse in the field of AI. The author proposes six basic questions about the physics of intelligence: attention, memory, information, spacetime, matter, and energy and thermodynamics. The concept of attention is related to position encoding of a data stream, and while it has some similarities between human and artificial systems, the differences are significant. Human attention is about energy management and regulation, and it is multifaceted. Memory is a curated record of everything a person has paid attention to in the past, but it is also modulated by evolving risk perceptions and emotions triggered by existing memories. Biological memory is strongly opinionated memory, and the training process is regulated by a live emotional regulation feedback loop that somehow registers and acts on evolving risk assessments. The author suggests that biological memories are Large Danger Models (LDMs) and that danger is all that is needed to generate human-like memory and experience of time.

And here is ChatGPT 4's response, slightly edited to remove one obvious glitch:

The Physics of Intelligence: The Missing Discourse of AI

The discourse on artificial intelligence (AI) is currently dominated by philosophy and engineering but lacks a physics perspective. To truly understand intelligence, it is necessary to consider its physical aspects, such as spatial and temporal extent, materiality, and thermodynamics.

Six questions about the physics of intelligence are proposed:
What is attention, and how does it work?
What role does memory play in intelligence?
How is intelligence related to information?
How is intelligence related to spacetime?
How is intelligence related to matter?
How is intelligence related to energy and thermodynamics?

Attention is fundamental to intelligence. In AI, attention refers to the position encoding of a data stream. While human and artificial attention mechanisms are similar, human attention is more complex and multifaceted, serving additional functions related to energy management, motivation, and emotion regulation.

Memory is the curated record of everything one has paid attention to in the past. Current AI models, such as LLMs (Language Models) and LIMs (Image Models), can be seen as compressed textual and visual memories. Human memory, however, is modulated by risk perceptions, emotions, and experiences. The idea of Large Danger Models (LDMs) is proposed, where memory is risk-modulated experiential data persistence that modulates ongoing experiential attention and risk-management choices. The next generation of AI models may include embodiment feedback loops that link memory to embodied behaviors in a feedback loop exposed to environmental dangers, resulting in more capable AIs.

Intelligence is related to information through efficient ontologies that reflect how an intelligence is situated in its environment. For intelligent systems, information is best understood in terms of ontological novelty rather than information-theoretic ways. Information is the hallucinations an intelligence makes up to categorize reality compactly to survive efficiently. This is why information comes after memory in the physics of intelligence, as memories are a function of real-time danger management behavior. Truth, in an evolutionary sense, is the most robust representation of experience that helps an organism survive efficiently in challenging environments.

The physics of intelligence is not solely about computation but also involves understanding the physical aspects of intelligence, such as attention, memory, and information. Future AI models may need to incorporate embodiment feedback loops and risk modulation to develop more human-like intelligence.
 



Monday, February 06, 2023

Openness to spiritual experiences...with caution

After a Protestant Christian upbringing (I was a teenage organist in an Austin Lutheran Church, and took a course from theologian Paul Tillich at Harvard), my adult materialistic scientific Deric has never been able to fathom how an intellectual like Ross Douthat could be a devout Catholic. My irrational faith is in a materialism that is open to spiritual experiences and insights, but also strives to explain them in materialistic terms (as I think near-death experiences have been). I think Douthat’s recent opinion piece in the NYTimes is very lucid, although I take exception to one of his pronouncements, and I would recommend that you read it. Here are some clips:
...the dissolution of the old order of American religion — the decline of churches and denominations and the rise of deinstitutionalized spirituality — means that more and more religious lives are lived in between worldviews, in experimental territory where it’s a mistake to expect coherence, theological consistency, a definite set of prior assumptions or beliefs...I want to defend the rationality of this kind of spiritual experimentation and then to warn about its dangers.
Douthat then offers three examples of this experimental style: magical thinking, experimenting with psychedelics, and pantheistic art that blurs spiritual traditions. And he continues:
For the stringent materialist, everything I’ve just described is reasonable as long as it's understood to be playacting, experience hunting, artistic experimentation. Only when it becomes serious does it offend against rationality.
However, stringent materialism is itself a weird late-modern superstition, and the kind of experimentation I’m describing is actually far more rational than a life lived as though the universe is random and indifferent and human beings are gene-transmission machines with an illusion of self-consciousness.
So... put me in the camp of irrational believers in stringent materialism. And... by what authority does Mr. Douthat get to declare that spiritual experimentation or superstition is "far more rational than life lived as though the universe is random, etc."? Superstition is superstition; irrational is irrational. What criteria is Mr. Douthat using for his "far more rational" judgment? Are they utilitarian?... as in "X diminishes or enhances the well being of humans more than Y"? He should explicitly state them.

Monday, January 23, 2023

Our different styles of thinking.

An interesting recent article by Joshua Rothman, the ideas editor of newyorker.com, notes several recent books that describe different styles of thinking. A few clips:
In “Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions,” Temple Grandin identifies a continuum of thought styles that’s roughly divisible into three sections. On one end are verbal thinkers, who often solve problems by talking about them in their heads or, more generally, by proceeding in the linear, representational fashion typical of language. (Estimating the cost of a building project, a verbal thinker might price out all the components, then sum them using a spreadsheet—an ordered, symbol-based approach.) On the other end of the continuum are “object visualizers”: they come to conclusions through the use of concrete, photograph-like mental images, as Grandin does when she compares building plans in her mind. In between those poles, Grandin writes, is a second group of visual thinkers—“spatial visualizers,” who seem to combine language and image, thinking in terms of visual patterns and abstractions.
Grandin proposes imagining a church steeple. Verbal people, she finds, often make a hash of this task, conjuring something like “two vague lines in an inverted V,” almost as though they’ve never seen a steeple before. Object visualizers, by contrast, describe specific steeples that they’ve observed on actual churches: they “might as well be staring at a photograph or photorealistic drawing” in their minds. Meanwhile, the spatial visualizers picture a kind of perfect but abstract steeple—“a generic New England-style steeple, an image they piece together from churches they’ve seen.” They have noticed patterns among church steeples, and they imagine the pattern, rather than any particular instance of it.
The imagistic minds in “Visual Thinking” can seem glamorous compared with the verbal ones depicted in “Chatter: The Voice in Our Head, Why It Matters, and How to Harness It,” by Ethan Kross. Kross is interested in what’s known as the phonological loop—a neural system, consisting of an “inner ear” and an “inner voice,” that serves as a “clearinghouse for everything related to words that occurs around us in the present.” If Grandin’s visual thinkers are attending Cirque du Soleil, then Kross’s verbal thinkers are stuck at an Off Broadway one-man show. It’s just one long monologue.
People with inner monologues, Kross reports, often spend “a considerable amount of time thinking about themselves, their minds gravitating toward their own experiences, emotions, desires, and needs.” This self-centeredness can spill over into our out-loud conversation. In the nineteen-eighties, the psychologist Bernard Rimé investigated what we’d now call venting—the compulsive sharing of negative thoughts with other people. Rimé found that bad experiences can inspire not only interior rumination but the urge to broadcast it. The more we share our unhappiness with others, the more we alienate them… Maybe it can pay to keep your thoughts to yourself.
Kross’s bottom line is that our inner voices are powerful tools that must be tamed. He ends his book with several dozen techniques for controlling our chatter. He advises trying “distanced self-talk”: by using “your name and the second-person ‘you’ to refer to yourself,” he writes, you can gain more command over your thinking. You might use your inner voice to pretend that you’re advising a friend about his problems; you might redirect your thoughts toward how universal your experiences are (It’s normal to feel this way), or contemplate how every new experience is a challenge you can overcome (I have to learn to trust my partner). The idea is to manage the voice that you use for self-management. Take advantage of the suppleness of dialogue. Don’t just rehearse the same old scripts; send some notes to the writers’ room.
If we can’t say exactly how we think, then how well do we know ourselves? In an essay titled “The Self as a Center of Narrative Gravity,” the philosopher Daniel Dennett argued that a layer of fiction is woven into what it is to be human. In a sense, fiction is flawed: it’s not true. But, when we open a novel, we don’t hurl it to the ground in disgust, declaring that it’s all made-up nonsense; we understand that being made up is actually the point. Fiction, Dennett writes, has a deliberately “indeterminate” status: it’s true, but only on its own terms. The same goes for our minds. We have all sorts of inner experiences, and we live through and describe them in different ways—telling one another about our dreams, recalling our thoughts, and so on. Are our descriptions and experiences true or fictionalized? Does it matter? It’s all part of the story.

Friday, January 20, 2023

A quick MindBlog riff on what a self is....

Spilling out what I was thinking at breakfast this morning, deciding to fire it off, probably incomprehensible to most readers, perhaps to attempt to clarify later (or delete!):

My self model or 'I' stands amongst my models of others, and during my early postnatal period I probably formed those other models prior to my own, a yet unknown self discovering and learning to predict the behavior of others to gain feeding and care, and only then composing my own self from parts of them. This is consonant with Graziano's take on consciousness as a perceptual construct ("Consciousness and the Social Brain"), also with Wegner's self as being the emotion of authorship ("The Illusion of Conscious Will"), and with Metzinger's view of emotions as evolved virtual organs analogous to the hardware of the liver or kidney ("The Ego Tunnel"). Perhaps the closest we come to a 'real fundamental self' is the experience of occupancy of an expanded space (of non-dual consciousness) that feels to be the container for all of this.

Monday, December 12, 2022

Autism attenuates the perception of the mind-body divide

A fascinating piece of work from Berent et al.:  

Significance

Across cultures, people consider the mind as ethereal, distinct from the body. But whether Dualism arises only from culture (nurture) or also spontaneously (from human nature) is unknown. To address this question, here, we turn to autism spectrum disorder (ASD)—a congenital disorder that compromises intuitive reasoning about the minds of others (theory of mind, ToM). If ToM promotes Dualist reasoning, then Dualist reasoning ought to be attenuated in ASD. Our results show that, compared to controls, people with ASD are more likely to view bodies and minds alike (in line with Physicalism). Moreover, a Physicalist stance is linked to difficulties with ToM. These results shed light on ASD and on the mind-body distinction in humans.
Abstract
People are intuitive Dualists—they tacitly consider the mind as ethereal, distinct from the body. Here we ask whether Dualism emerges naturally from the conflicting core principles that guide reasoning about objects, on the one hand, and about the minds of agents (theory of mind, ToM), on the other. To address this question, we explore Dualist reasoning in autism spectrum disorder (ASD)—a congenital disorder known to compromise ToM. If Dualism arises from ToM, then ASD ought to attenuate Dualism and promote Physicalism. In line with this prediction, Experiment 1 shows that, compared to controls, people with ASD are more likely to view psychological traits as embodied—as likely to manifest in a replica of one’s body. Experiment 2 demonstrates that, unlike controls, people with ASD do not consider thoughts as disembodied—as persistent in the afterlife (upon the body’s demise). If ASD promotes the perception of the psyche as embodied, and if (per Essentialism) embodiment suggests innateness, then ASD should further promote Nativism—this bias is shown in Experiment 3. Finally, Experiment 4 demonstrates that, in neurotypical (NT) participants, difficulties with ToM correlate with Physicalism. These results are the first to show that ASD attenuates Dualist reasoning and to link Dualism to ToM. These conclusions suggest that the mind-body distinction might be natural for people to entertain.

Monday, November 28, 2022

The Computational Society

The most recent issue of Trends in Cognitive Sciences presents a forum in its 25th Anniversary Series: Looking Forward. Several of the contributions are open source (you can email me to request access to those that are not), and I would like to point to Nick Chater's brief article "The computational society," passing on his initial and final comments. I suggest you read through his descriptions of what he thinks are four promising lines of work.
How do individual human minds create languages, legal systems, scientific theories, and technologies? From a cognitive science viewpoint, such collective phenomena may be considered a type of distributed computation in which human minds together solve computational problems beyond any individual. This viewpoint may also shift our perspective on individual minds.
To make the computational society more than a metaphor, we need conceptual tools and methods to understand social phenomena in information-processing terms. Fortunately, several different, yet complementary, approaches have emerged in recent years. Here I highlight four promising lines of work: (i) social interaction as computation, (ii) the computational Leviathan, (iii) collective self-correction and rationality, and (iv) computation through spontaneous order.
Cognitive science may stand on the brink of a new revolution, seeing social, organizational, and cultural processes as distributed computation. If so, we will need to look afresh at the computational role of individual minds. For example, rather than seeing each developing child as a lone minilinguist or a scientist-in-the-crib, we may, following Adam Ferguson, see humans as primarily learning to contribute to collective computations beyond individual understanding.

Wednesday, November 16, 2022

The neurophysiology of consciousness - neural correlates of qualia

This is a post for consciousness mavens. Tucker, Luu, and Johnson have offered a neurophysiological model of consciousness, "Neurophysiological mechanisms of implicit and explicit memory in the process of consciousness." The open source article has useful summary graphics, and embraces the 'Hard Problem' of consciousness - the nature of 'qualia' (how it feels to see red, eat an apple, etc.). Here I pass on brief, and then more lengthy, paragraphs on what the authors think is new and noteworthy about their ideas.
The process of consciousness, generating the qualia that may appear to be irreducible qualities of experience, can be understood to arise from neurophysiological mechanisms of memory. Implicit memory, organized by the lemnothalamic brain stem projections and dorsal limbic consolidation in REM sleep, supports the unconscious field and the quasi-conscious fringe of current awareness. Explicit memory, organized by the collothalamic midbrain projections and ventral limbic consolidation of NREM sleep, supports the focal objects of consciousness.
Neurophysiological mechanisms are increasingly understood to constitute the foundations of human conscious experience. These include the capacity for ongoing memory, achieved through a hierarchy of reentrant cross-laminar connections across limbic, heteromodal, unimodal, and primary cortices. The neurophysiological mechanisms of consciousness also include the capacity for volitional direction of attention to the ongoing cognitive process, through a reentrant fronto-thalamo-cortical network regulation of the inhibitory thalamic reticular nucleus. More elusive is the way that discrete objects of subjective experience, such as the color of deep blue or the sound of middle C, could be generated by neural mechanisms. Explaining such ineffable qualities of subjective experience is what Chalmers has called “the hard problem of consciousness,” which has divided modern neuroscientists and philosophers alike. We propose that insight into the appearance of the hard problem can be gained through integrating classical phenomenological studies of experience with recent progress in the differential neurophysiology of consolidating explicit versus implicit memory. Although the achievement of consciousness, once it is reflected upon, becomes explicit, the underlying process of generating consciousness, through neurophysiological mechanisms, is largely implicit. Studying the neurophysiological mechanisms of adaptive implicit memory, including brain stem, limbic, and thalamic regulation of neocortical representations, may lead to a more extended phenomenological understanding of both the neurophysiological process and the subjective experience of consciousness.

Saturday, October 22, 2022

New Perspectives on how our Minds Work

I want to pass on to MindBlog readers this link to a lecture I gave this past Friday (10/21/22) to the Univ. of Texas OLLI (Osher Lifelong Learning Institute) UT FORUM group. Here is the brief description of the talk:

Abstract

Recent research shows that much of what we thought we knew about how our minds work is wrong. Rather than rising from our essential natures, our emotional and social realities are mainly invented by each of us. Modern and ancient perspectives allow us to have some insight into what we have made.
Description
This talk offers a description of how our predictive brains work to generate our perceptions, actions, emotions, concepts, language, and social structures. Our experience that a self or "I" inside our heads is responsible for these behaviors is a useful illusion, but there is in fact no homunculus or discrete place inside our heads where “it all comes together.” Starting before we are born, diffuse networks of brain cells begin generating actions and perceiving their consequences to build an internal library of sensing and acting correlations that keep us alive and well, a library that is the source of predictions about what we might expect to happen next in our worlds. Insights from both modern neuroscience research and ancient meditative traditions allow us to partially access and sometimes change this archive that manages our behaviors.

Monday, September 12, 2022

Non-duality as a platform for experiencing daily life.

I pass on an abstracted, edited clip from a recent podcast conversation between Sam Harris and M.I.T. philosopher Kieran Setiya on “Philosophy and the Good Life,” which has a succinct definition of what Harris takes non-duality or ‘having no self’ to mean, and how experiencing this can lead to a surprising kind of equanimity and even eudemonia, as well as solving a very wide class of psychological problems. I suggest that MindBlog readers who enjoy this subject matter also have a look back at my March 22 post, titled "Points on having a self and free will."
There can be confusion over what is meant by no-self in various meditative traditions. It’s not the claim that people are illusions, or that you can’t say anything about the psychological or biological continuity of a person. It’s not mysterious that we wake up today as ourselves and not someone else. These are not the puzzles being addressed.
The core insight, the illusion that is cut through, conceptually and experientially, is our apparent normal default condition of feeling like there is a subject in the center of experience. Most people don’t merely feel identical to experience, they feel like they are having an experience, they feel like they are appropriating their experience from some point - very likely in their heads - the witness, the thinker of thoughts, the willer of will, the guy in the boat who has free will, who can decide what to think and do next. That’s the default state for almost everybody, and commonplace as it is, it is a peculiar point of view. It doesn’t make a lot of sense, particularly biological sense. It’s not the same thing as feeling identical to our bodies, because we usually don’t feel identical to our bodies, we feel like we are subjects who have bodies, there is a kind of Cartesian dualism that is intuitive. People are ‘common sense dualists.’ As a matter of experience there is this feeling that “I am a subject behind my face” - that “I” am a subject, internal to the body, who is thinking and who has a body. It is the final representation of the subject that is the illusion.
To put this in neurological terms, let’s just say for the sake of argument that all of this is just neurophysiological events in the brain delivering these representations. It is plausible that any one of these processes can be interrupted, so that you can cease to faithfully or coherently represent a world. You can suddenly go blind, may not be able to name living things but still be able to name tools, suddenly not be able to perceive motion or location; those things can break apart. All kinds of things can be disrupted for the worse, certainly. But what these contemplative traditions have recognized is that certain things can be disrupted or brought to a halt for the better. The thing that can interrupt the usual cascade of mediocrity and suffering, psychologically speaking, is this representation of self as subject in the middle of experience.
You can cease to construct a subject that is internal to the body. What remains in that case is a sense that mind is much vaster than it was a moment ago, because it is no longer confined to the sense that there is this central thinker of thoughts. There is a recognition that thoughts arise all by themselves, just as sounds do, no one is authoring your thoughts - you certainly aren’t. The sense that you are is what it feels like to be identified with this next spontaneously arising thought.
So, you lose the sense that you are on the edge of your life, looking over your own shoulder, appropriating experience, and what you can feel very vividly here is a real unity, emptiness, non-duality of subject and object, such that there is really just experience. This is not a new way of thinking about yourself in the world; this is a ceasing to identify with thought. This is making no metaphysical claims about how this relates to matter or the universe.
As a matter of experience you can feel identical to experience itself. You are not standing on the river bank watching things go by, you are the river, and that solves a very wide class of problems, psychologically speaking, with respect to suffering. And it does land one in a surprising kind of equanimity and even eudemonia (well being) that may seem counterintuitive in the midst of the cacophony of daily life. But again, it’s not about the negation of personhood, it is just a recognition that as a matter of experience there is just experience, and the feeling that there is an experiencer is yet more experience, so that if you just drop back… there is just everything in its own place.

Wednesday, August 17, 2022

Alcohol, neuronal plasticity, and mitochondrial trafficking

Hernandez and Kaun provide a nice description of work by Knabbe et al. with summary graphics. Here is the start of their text:
Consumption of alcohol creates a sense of euphoria, reduces inhibition, and increases sociability and impulsivity. The age at which alcohol is first experienced is a key factor contributing to the likelihood to misuse alcohol. However, the impacts of the first experience of alcohol on the molecules in the brain at these key developmental stages are not well understood. Knabbe et al. endeavored to address the neuromolecular alterations resulting from acute alcohol by combining hippocampal proteomics with somatosensory and motor cortex protein, dendrite, axon, and mitochondrial analysis in adolescent mice. Evidence from this array of preparations led to the hypothesis that alcohol disrupted mitochondrial trafficking, and using Drosophila they demonstrated a functional role for mitochondrial trafficking in cue-induced alcohol preference.
The cross-assay and cross-species approach outlined in Knabbe et al. proved to be an effective way of discovering how alcohol hijacks brain mechanisms. Animals from flies to humans maintain functionally consistent neurotransmitter systems, neural circuit mechanisms, and molecular pathways underlying reward.
And here is the abstract from Knabbe et al.:
Alcohol intoxication at early ages is a risk factor for the development of addictive behavior. To uncover neuronal molecular correlates of acute ethanol intoxication, we used stable-isotope-labeled mice combined with quantitative mass spectrometry to screen more than 2,000 hippocampal proteins, of which 72 changed synaptic abundance up to twofold after ethanol exposure. Among those were mitochondrial proteins and proteins important for neuronal morphology, including MAP6 and ankyrin-G. Based on these candidate proteins, we found acute and lasting molecular, cellular, and behavioral changes following a single intoxication in alcohol-naïve mice. Immunofluorescence analysis revealed a shortening of axon initial segments. Longitudinal two-photon in vivo imaging showed increased synaptic dynamics and mitochondrial trafficking in axons. Knockdown of mitochondrial trafficking in dopaminergic neurons abolished conditioned alcohol preference in Drosophila flies. This study introduces mitochondrial trafficking as a process implicated in reward learning and highlights the potential of high-resolution proteomics to identify cellular mechanisms relevant for addictive behavior.

Monday, August 15, 2022

A systematic review of microdosing - research on low dose psychedelics

I pass on the link to this review by Polito and Liknaitzky. Their abstract:
The use of low doses of psychedelic substances (microdosing) is attracting increasing interest. This systematic review summarises all empirical microdosing research to date, including a set of infrequently cited studies that took place prior to prohibition. Specifically, we reviewed 44 studies published between 1955 and 2021, and summarised reported effects across six categories: mood and mental health; wellbeing and attitude; cognition and creativity; personality; changes in conscious state; and neurobiology and physiology. Studies showed a wide range in risk of bias, depending on design, age, and other study characteristics. Laboratory studies found changes in pain perception, time perception, conscious state, and neurophysiology. Self-report studies found changes in cognitive processing and mental health. We review data related to expectation and placebo effects, but argue that claims that microdosing effects are largely due to expectancy are premature and possibly wrong. In addition, we attempt to clarify definitional inconsistencies in the microdosing literature by providing suggested dose ranges across different substances. Finally, we provide specific design suggestions to facilitate more rigorous future research.

Wednesday, June 29, 2022

Theories of consciousness

A valuable summary article from Anil Seth and Tim Bayne. The article has excellent graphics demonstrating the four main models of consciousness that are the focus of most current attention and research. Motivated readers can request a copy of the whole article from me.

Abstract

Recent years have seen a blossoming of theories about the biological and physical basis of consciousness. Good theories guide empirical research, allowing us to interpret data, develop new experimental techniques and expand our capacity to manipulate the phenomenon of interest. Indeed, it is only when couched in terms of a theory that empirical discoveries can ultimately deliver a satisfying understanding of a phenomenon. However, in the case of consciousness, it is unclear how current theories relate to each other, or whether they can be empirically distinguished. To clarify this complicated landscape, we review four prominent theoretical approaches to consciousness: higher-order theories, global workspace theories, re-entry and predictive processing theories and integrated information theory. We describe the key characteristics of each approach by identifying which aspects of consciousness they propose to explain, what their neurobiological commitments are and what empirical data are adduced in their support. We consider how some prominent empirical debates might distinguish among these theories, and we outline three ways in which theories need to be developed to deliver a mature regimen of theory-testing in the neuroscience of consciousness. There are good reasons to think that the iterative development, testing and comparison of theories of consciousness will lead to a deeper understanding of this most profound of mysteries.

Friday, June 10, 2022

The Conscious Turing Machine - a blueprint for conscious machines.

I want to point to a paper in the current PNAS by Blum and Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," as well as a commentary on it by Oliveira. I do this before diving in to read it and hopefully understand it myself, to alert consciousness mavens to its appearance. A first glance through it makes me think that getting a grip on understanding the model will take considerable effort on my part. Perhaps I will emerge with some commentary, perhaps not.... I pass on the Blum and Blum opening statements:

Significance

This paper provides evidence that a theoretical computer science (TCS) perspective can add to our understanding of consciousness by providing a simple framework for employing tools from computational complexity theory and machine learning. Just as the Turing machine is a simple model to define and explore computation, the Conscious Turing Machine (CTM) is a simple model to define and explore consciousness (and related concepts). The CTM is not a model of the brain or cognition, nor is it intended to be, but a simple substrate-independent computational model of (the admittedly complex concept of) consciousness. This paper is intended to introduce this approach, show its possibilities, and stimulate research in consciousness from a TCS perspective.
Abstract
This paper examines consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature.
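
(To help myself hold the core picture before tackling the formal details: the CTM builds on the global workspace idea that many special-purpose processors compete and the winner's "chunk" is broadcast to all. The toy loop below is my own loose cartoon of one such cycle, not the Blums' formal construction.)

import random

def workspace_cycle(processors, state):
    # Processors bid to get their chunk into the global workspace; the
    # winning chunk is then broadcast back to every processor.
    bids = [(p["score"](state), p["name"], p["chunk"](state)) for p in processors]
    weight, winner, chunk = max(bids)   # competition for conscious access
    for p in processors:
        p["receive"](winner, chunk)     # the broadcast
    return winner, chunk

processors = [
    {
        "name": name,
        "score": lambda s, b=bias: b + random.random(),   # noisy salience
        "chunk": lambda s, n=name: f"{n}: {s}",
        "receive": lambda w, c: None,   # stub: update internal model
    }
    for name, bias in [("vision", 0.6), ("hearing", 0.3), ("memory", 0.1)]
]

print(workspace_cycle(processors, "a dog is barking"))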

Monday, May 09, 2022

Graziano's conceptual framework for consciousness

I would like to pass on this link to Graziano's latest (open source) explication of his theory of consciousness, continuing a MindBlog thread that started with a 2014 post on his 2013 book "Consciousness and the Social Brain." Here is his abstract:
This article argues that consciousness has a logically sound, explanatory framework, different from typical accounts that suffer from hidden mysticism. The article has three main parts. The first describes background principles concerning information processing in the brain, from which one can deduce a general, rational framework for explaining consciousness. The second part describes a specific theory that embodies those background principles, the Attention Schema Theory. In the past several years, a growing body of experimental evidence—behavioral evidence, brain imaging evidence, and computational modeling—has addressed aspects of the theory. The final part discusses the evolution of consciousness. By emphasizing the specific role of consciousness in cognition and behavior, the present approach leads to a proposed account of how consciousness may have evolved over millions of years, from fish to humans. The goal of this article is to present a comprehensive, overarching framework in which we can understand scientifically what consciousness is and what key adaptive roles it plays in brain function.
The article is worth a read, and here is Graziano's bottom line, from the last paragraph of his article:
If you start your search for consciousness by assuming the existence of a subjective feeling—a private component that cannot be measured and can only be felt and attested to, experienceness itself—then you are assuming the literal accuracy of an internal model. By principle 1, your conviction that you have consciousness depends on an information set in your brain. By principle 2, the brain’s models are never accurate. You have accepted the literal truth of a caricature, and you will never find the answer to your ill-posed question. When the police draw a sketch of a suspect, and you set yourself the task of finding a flat man made of graphite, you will fail. Yet at the same time, if you take the opposite approach and insist that the sketch is an empty illusion, you are missing the point. Instead, understand the sketch for what it is: a schematic representation of something real. We can explain physical processes in the brain; we can explain the models constructed by the brain to represent those physical processes; we can explain the way those models depict reality in a schematic, imperfect manner; we can explain the cognitive beliefs that stem from those imperfect models; and most importantly, we can explain the adaptive, cognitive benefits served by those models. AST is not just a theory of consciousness. It is a theory of adaptive mechanisms in the brain.

Wednesday, April 06, 2022

The Science of Consciousness - Tucson 2022

For junkies of consciousness and altered states thereof, as in anesthesia or psychedelic-induced states, a browsing of the abstracts submitted for The Science of Consciousness 2022 in Tucson April 18-22 makes for stimulating reading (though I found pauses to recover from input overload to be necessary). I attended the first “Towards a Science of Consciousness” - now known as TSC, “The Science of Consciousness” - in 1994, as well as a number of subsequent meetings, and it was encouragement from several then luminaries in the field that emboldened me to turn the lecture notes from my “Biology of Mind” course at the University of Wisconsin into a book of that title, which was published in 1999. There are numerous bon-bons to be found in this year's set of abstracts, one example from many being Huang’s noting of his work on how the anterior insula, situated between unimodal and transmodal cortical areas along the brain’s primary functional gradient, regulates the default mode – dorsal attention network transitions, and gates conscious access of sensory information.

Monday, February 21, 2022

Consciousness is supported by near-critical slow cortical electrodynamics

An open source offering from Toker et al. that has a nice summary graphic:

Significance

What changes in the brain when we lose consciousness? One possibility is that the loss of consciousness corresponds to a transition of the brain’s electric activity away from edge-of-chaos criticality, or the knife’s edge in between stability and chaos. Recent mathematical developments have produced tools for testing this hypothesis, which we apply to cortical recordings from diverse brain states. We show that the electric activity of the cortex is indeed poised near the boundary between stability and chaos during conscious states and transitions away from this boundary during unconsciousness and that this transition disrupts cortical information processing.
Abstract
Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
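
(For the mathematically curious, the standard 0-1 chaos test of Gottwald and Melbourne is short enough to sketch; Toker et al. use a modified, noise-robust variant, so treat this as the textbook version only.)

import numpy as np

def zero_one_chaos_test(x, n_c=20, seed=0):
    # Project the series onto rotating coordinates and ask whether the
    # trajectory diffuses (chaos, K near 1) or stays bounded (regular
    # dynamics, K near 0).
    rng = np.random.default_rng(seed)
    N = len(x)
    j = np.arange(1, N + 1)
    n = np.arange(1, N // 10 + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(x * np.cos(j * c))   # translation variables
        q = np.cumsum(x * np.sin(j * c))
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                      for k in n])         # mean-square displacement
        Ks.append(np.corrcoef(n, M)[0, 1]) # does M grow linearly with n?
    return np.median(Ks)

def logistic(r, n=2000, x0=0.5):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs[200:])              # drop the transient

for r in (3.55, 3.97):                     # periodic vs chaotic regime
    print(r, round(zero_one_chaos_test(logistic(r)), 2))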

Monday, December 06, 2021

The Science of Mind Reading

James Somers offers a fascinating article in the Nov. 29 issue of The New Yorker, which I recommend that you read. It describes the development of the technique of Latent Semantic Analysis (L.S.A.), originating in the work of a psychologist named Charles Osgood nearly 70 years ago and now being applied to the analysis of fMRI recordings from people to infer what they are internally thinking or seeing.
In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail. Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the “vectorization” made popular by L.S.A. and word2vec could be used to map all sorts of things. Today’s facial-recognition systems have dimensions that represent the length of the nose and the curl of the lips, and faces are described using a string of coördinates in “face space.” Chess A.I.s use a similar trick to “vectorize” positions on the board. The technique has become so central to the field of artificial intelligence that, in 2017, a new, hundred-and-thirty-five-million-dollar A.I. research center in Toronto was named the Vector Institute. Matthew Botvinick, a professor at Princeton whose lab was across the hall from Norman’s, and who is now the head of neuroscience at DeepMind, Alphabet’s A.I. subsidiary, told me that distilling relevant similarities and differences into vectors was “the secret sauce underlying all of these A.I. advances.”
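
(The king/queen arithmetic is easy to try yourself. A sketch assuming gensim and its downloadable copy of the Google News word2vec vectors, which is a large download on first use:)

import gensim.downloader as api

# Pretrained 300-dimensional word2vec vectors trained on Google News.
model = api.load("word2vec-google-news-300")

# king - man + woman: the nearest remaining vector should be "queen".
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))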

 


Subsequent sections of the article describe how machine learning has been brought to brain imaging with voxels of neural activity serving as dimensions in a kind of thought space.

...today’s thought-decoding researchers mostly look for specific thoughts that have been defined in advance. But a “general-purpose thought decoder,” Norman told me, is the next logical step for the research. Such a device could speak aloud a person’s thoughts, even if those thoughts have never been observed in an fMRI machine. In 2018, Botvinick, Norman’s hall mate, co-wrote a paper in the journal Nature Communications titled “Toward a Universal Decoder of Linguistic Meaning from Brain Activation.” Botvinick’s team had built a primitive form of what Norman described: a system that could decode novel sentences that subjects read silently to themselves. The system learned which brain patterns were evoked by certain words, and used that knowledge to guess which words were implied by the new patterns it encountered.
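
(The recipe behind such decoders can be sketched with synthetic data: learn a linear map from voxel patterns to word vectors, then decode a novel pattern by nearest neighbor in vector space. This toy, using scikit-learn ridge regression and fake "brain" data, is my illustration of the idea, not the published pipeline.)

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, n_dims = 500, 50
vocab = ["dog", "house", "music", "river"]
word_vecs = {w: rng.normal(size=n_dims) for w in vocab}   # stand-in embeddings

# Synthetic "brain data": each word evokes a noisy voxel pattern through
# an unknown linear mixing (real studies use fMRI recordings instead).
mixing = rng.normal(size=(n_dims, n_voxels))
def brain_pattern(word):
    return word_vecs[word] @ mixing + rng.normal(scale=2.0, size=n_voxels)

X = np.array([brain_pattern(w) for w in vocab for _ in range(20)])
Y = np.array([word_vecs[w] for w in vocab for _ in range(20)])
decoder = Ridge(alpha=1.0).fit(X, Y)      # voxels -> semantic vector

def decode(pattern):
    # Decode an unseen pattern by nearest cosine neighbor in vocab space.
    v = decoder.predict(pattern[None, :])[0]
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(v, word_vecs[w]))

print(decode(brain_pattern("river")))     # expected: river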

Monday, November 22, 2021

Fluid intelligence and the locus coeruleus-norepinephrine system

Tsukahara and Engle suggest that the cognitive mechanisms of fluid intelligence map onto the locus coeruleus–norepinephrine system. I pass on their introductory paragraph (the link takes you to their abstract, which I think is less informative):
In this article, we outline what we see as a potentially important relationship for understanding the biological basis of intelligence: that is, the relationship between fluid intelligence and the locus coeruleus–norepinephrine system. This is largely motivated by our findings that baseline pupil size is related to fluid intelligence; the larger the pupils, the higher the fluid intelligence. The connection to the locus coeruleus is based on research showing that the size of the pupil can be used as an indicator of locus coeruleus activity. A large body of research on the locus coeruleus–norepinephrine system in animal and human studies has shown how this system is critical for an impressively wide range of behaviors and cognitive processes, from regulating sleep/wake cycles, to sensation and perception, attention, learning and memory, decision making, and more. The locus coeruleus–norepinephrine system achieves this primarily through its widespread projection system throughout the cortex, strong connections with the prefrontal cortex, and the effect of norepinephrine at many levels of brain function. Given the broad role of this system in behavior, cognition, and brain function, we propose that the locus coeruleus–norepinephrine system is essential for understanding the biological basis of intelligence.

Friday, November 12, 2021

Freedom From Illusion

A friend who attended the lecture I gave last Sunday ("A New Vision of How Our Minds Work," mentioned in a Monday post) sent me an article from The Buddhist Review "TRICYCLE" by Pema Düddul titled "Freedom From Illusion". If you scan both texts, I suspect you will find, as I do, a striking consonance between the neuroscientific and Buddhist perspectives on "Illusion."

From the beginning of the Düddul article:

A shooting star, a clouding of the sight, 
a lamp, an illusion, a drop of dew, a bubble, 
a dream, a lightning’s flash, a thunder cloud: 
this is the way one should see the conditioned.
This revered verse from the Diamond Sutra points to one of Buddhism’s most profound yet confounding truths—the illusory nature of all things. The verse is designed to awaken us to ultimate reality, specifically to the fact that all things, especially thoughts and feelings, are the rainbow-like display of the mind. One of the Tibetan words for the dualistic mind means something like “a magician creating illusions.” As my teacher Ngakpa Karma Lhundup Rinpoche explained: “All of our thoughts are magical illusions created by our mind. We get trapped, carried away by our own illusions. We forget that we are the magician in the first place!”
Compare this with my talk's description of predictive processing, and how what we see, hear, touch, taste, and smell are largely simulations or illusions about the world. Here is a summary sentence in one of my slides, taken from a lecture by Ruben Laukkonen, in which I replace his last word, 'fantasies,' with the word 'illusions.'
Everything we do and experience is in service of reducing surprises by fulfilling illusions.
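
The mechanical core of that sentence fits in a few lines - perception as a running prediction, corrected by surprise. A toy version of my own:

# A toy predictive-processing loop: the "illusion" is a running estimate
# that is nudged by prediction error (surprise) from each observation.
estimate = 0.0
learning_rate = 0.1
for observation in [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]:
    error = observation - estimate      # surprise
    estimate += learning_rate * error   # update the illusion to reduce it
    print(round(estimate, 2))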

Monday, November 01, 2021

What the mind is - similarities and differences in concepts of mental life in five cultures

From Weisman et al., who do a fascinating study of cognitive structures 'from the bottom up', allowing data to give rise to ontological structures, rather than working 'from the top down' by using a theory to guide hypothesis-driven data collection:
How do concepts of mental life vary across cultures? By asking simple questions about humans, animals and other entities – for example, ‘Do beetles get hungry? Remember things? Feel love?’ – we reconstructed concepts of mental life from the bottom up among adults (N = 711) and children (ages 6–12 years, N = 693) in the USA, Ghana, Thailand, China and Vanuatu. This revealed a cross-cultural and developmental continuity: in all sites, among both adults and children, cognitive abilities travelled separately from bodily sensations, suggesting that a mind–body distinction is common across diverse cultures and present by middle childhood. Yet there were substantial cultural and developmental differences in the status of social–emotional abilities – as part of the body, part of the mind or a third category unto themselves. Such differences may have far-reaching social consequences, whereas the similarities identify aspects of human understanding that may be universal.