Showing posts with label attention/perception. Show all posts

Monday, March 18, 2024

The Physics of Non-duality

 I want to pass on this lucid posting by "Sean L" of Boston to the community.wakingup.com site:

The physics of nonduality

In this context I mean “nonduality” as it refers to there being no real subject-object separation or duality. One of the implications of physics that originally led me to investigate notions of “awakening” and “no-self” is the idea that there aren’t really any separate objects. We can form very self-consistent and useful concepts of objects (a car, an atom, a self, a city), but from the perspective of the universe itself such objects don’t actually exist as well-defined independent “things.” All that’s real is the universe as one giant, self-interacting, dynamic, but ultimately singular “thing.” If you try to partition off one part of the universe (a self, a galaxy) from the rest, you’ll find that you can’t actually do so in a physically meaningful way (and certainly not one that persists over time). All parts of the universe constantly interact with their local environment, exchanging mass and energy. Objectively, physics says that all points in spacetime are characterized by values of different types of fields and that’s all there is to it. Analogy: you might see this word -->self<-- on your computer screen and think of it as an object, but really it’s just a pattern of independent adjacent pixel values that you’re mapping to a concept. There is no objectively physically real “thing” that is the word self (just a well defined and useful concept). 
 

This is akin to the idea that few if any of the cells making up your body now are the same as when you were younger. Or the idea that the exact pattern you consider to be “you” in this moment will not be numerically identical to what you consider “you” in the next picosecond. Or the idea that there is nothing that I could draw a closed physical boundary around that perfectly encloses the essence of “you” such that excluding anything inside that boundary means it would no longer contain “you” and including anything currently outside the boundary would mean it contains more than just “you.” This is true even if you try to draw the boundary around only your brain or just specific parts of your brain. I think this is a fun philosophical idea, but also one that often gets the response of “ok, yeah, sure I guess that’s all logically consistent,” but then still feels esoteric. It often feels like it’s just semantics, or splitting hairs, or somehow not something that really threatens the idea of identity or of a physical self. 
 

I was recently discussing with another WakingUp user that what made this notion far more visceral and convincing to me (enough to motivate me to go out in search of what “I” actually am, which has ultimately led me here) was realizing that even the very idea of trying to draw a boundary around a “thing” is objectively meaningless. So, I thought I’d share what I mean by that in case others find it interesting too :) !

Here are four pictures. The two on the left are pictures of very simple boundaries of varying thickness that one might try to draw around a singular “thing” (perhaps a self?) to demonstrate that it is indeed a well defined object. The two pictures on the right are of the exact same “boundaries” as on the left, but as they would be seen by a creature that evolved to perceive reality in the momentum basis. I’ll better explain what that means in a moment, but the key point is that the pictures on the left and right are (as far as physics or objective reality is concerned) exactly equivalent representations of the same part of reality. Both sets of pictures are perceptual “pointers” to the same part of the universe. You literally cannot say that one is a more veridical or more accurate depiction of reality than the other, because they are equivalent mathematical descriptions of the same underlying objective structure. Humans just happen to have a bias toward the left images.
 

So then… what could these “boundaries” be enclosing in the pictures on the right? I sure can’t tell. Nor do I think it even makes sense to ask the question! Our sense that there are discrete “objects” in the universe (including selves) seems intuitive when perceiving the universe as shown on the left (as we do). But when perceiving the exact same reality as shown on the right I find this belief very quickly breaks down. There simply is no singular, bounded, contained “thing” on the right. Anything that might at first appear on the left to be a separable object will be completely mixed up with and inseparable from its “surroundings” when viewed on the right, and vice-versa. The boundary itself clearly isn’t even a boundary. Boundaries are (very useful!) concepts, but they have no ultimate objective physical meaning.

----------------------

Some technical details for those interested (ignore this unless interested):

You can think of a basis like a fancy coordinate system. Analogy: I can define the positions of all the billiard balls on a pool table by defining an XY coordinate system on the table and listing the numerical coordinates of each ball. But if I stretch and/or rotate that coordinate system then all the numbers representing those same positions of the balls will change. The balls themselves haven’t changed positions, but my coordinate system-dependent “perception” of the balls is totally different. They're different ways of perceiving the same fundamental structure (billiard ball positions), even though that structure itself exists independently of any coordinate system. The left and right images are analogous to different coordinate systems, but in a more fundamental way.
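The billiard-ball analogy above can be made concrete with a minimal numerical sketch (the specific ball positions and rotation angle are illustrative assumptions, not from the post): rotating the coordinate system changes every number describing the balls, while coordinate-independent structure, such as the distances between balls, is untouched.

```python
import numpy as np

# Positions of three billiard balls in one XY coordinate system (arbitrary units).
balls = np.array([[0.2, 0.5], [1.1, 0.3], [0.7, 0.9]])

# Rotate the coordinate system by 40 degrees: every numerical coordinate changes.
theta = np.radians(40)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = balls @ R.T

# But the coordinate-independent structure (pairwise distances) is unchanged.
def pairwise_distances(pts):
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

print(np.allclose(pairwise_distances(balls), pairwise_distances(rotated)))
```

The distances play the role of the "fundamental structure" in the analogy: they exist independently of any coordinate system, while the coordinates themselves are just one of infinitely many equivalent descriptions.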

 

In quantum mechanics the correct description of reality is the wave function. For an entangled system of particles you don’t have a separate wave function for each particle. Instead, you have one multi-particle wave function for the whole system (strictly speaking one could argue the entire universe is most correctly described as a single giant and stupidly complicated wave function). There is no way to break up this wave function into separate single-particle wave functions (that’s what it means for the particles to be entangled). What that means is from the perspective of true reality, there literally aren’t separate distinct particles. That’s not just an interpretation – it’s a testable (falsifiable) statement of physical reality and one that has been extensively verified experimentally. So, if you think of yourself as a distinct object, or at least as a dynamical evolving system of interacting (but themselves distinct) particles, sorry, but that’s simply not how we understand reality to actually work :P .
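The claim that an entangled wave function cannot be broken into separate single-particle wave functions can be checked numerically in the simplest case, two qubits. This is a standard textbook test (Schmidt decomposition via the SVD), sketched here with an illustrative Bell state; the particular states chosen are assumptions for demonstration, not from the post.

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2), written in the computational basis.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# A two-qubit state factorizes into two independent single-particle states exactly
# when the 2x2 matrix of its amplitudes has rank 1 (Schmidt rank 1). The singular
# values of that matrix are the Schmidt coefficients.
bell_coeffs = np.linalg.svd(bell.reshape(2, 2), compute_uv=False)
print(np.round(bell_coeffs, 3))  # two equal nonzero values -> entangled, no split possible

# Contrast with a product state |0> (x) |+>, which does factorize:
product = np.kron([1, 0], np.array([1, 1]) / np.sqrt(2))
product_coeffs = np.linalg.svd(product.reshape(2, 2), compute_uv=False)
print(np.round(product_coeffs, 3))  # one nonzero value -> separable
```

The Bell state's two equal Schmidt coefficients mean there is no way, in any basis, to describe either qubit by its own standalone wave function, which is the precise sense in which "there literally aren't separate distinct particles."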
 

However, to do anything useful we have to write down the wave function (e.g. so we can do calculations with it). We have to represent it mathematically. This requires choosing a basis in which to write it down, much like choosing a coordinate system with which to be able to write down the numerical positions of the billiard balls. A human-intuitive basis is the position basis, which is what’s shown in the left images. However, a completely equivalent way to write down the same wave function is in the momentum basis, which is what’s shown in the right images. There also exist many (really, infinite) other possible bases. Some bases will be more convenient than others depending on the type of calculation you’re trying to do. Ultimately, all bases are arbitrary and none are objectively real, because the universe doesn’t need to “write down” a wave function to compute it. The universe just is. To me, the equivalent representation of the same underlying reality in an infinite diversity of possible Hilbert Spaces (i.e. using different bases) much more viscerally drives home the point that there really are no separate objects (including selves). That’s not just philosophy! There’s just one objective reality (one thing, no duality) that can be perceived in an infinite variety of ways, each with different pros and cons. And our way of perceiving reality lends itself to concepts of separate things and objects.
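The position/momentum equivalence described above is, mathematically, a Fourier transform: the same wave function written in two bases. A minimal sketch (the Gaussian wave packet and grid are illustrative assumptions) shows that both descriptions carry identical total probability, so neither is "more real" than the other.

```python
import numpy as np

# A Gaussian wave packet sampled on a 1D position grid.
x = np.linspace(-10, 10, 1024)
dx = x[1] - x[0]
psi_x = np.exp(-x**2 / 2) * np.exp(1j * 3 * x)   # localized "object" in position space
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)  # normalize

# The momentum-basis description is the (discrete) Fourier transform of the same state.
psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi)
dp = 2 * np.pi / (len(x) * dx)                   # momentum-grid spacing

# Both bases describe the same state: total probability agrees (Parseval/Plancherel).
norm_x = np.sum(np.abs(psi_x)**2) * dx
norm_p = np.sum(np.abs(psi_p)**2) * dp
print(norm_x, norm_p)
```

A packet that looks sharply localized in `psi_x` is spread out in `psi_p` and vice versa, which is the sense in which a "bounded object" in one basis has no boundary at all in the other.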

 

There are other parts of physics I didn’t get into here that I think demonstrate that the true nature of the universe must be nondual (maybe to be discussed later). For example, the lack of room for free will or the indistinguishability of particles. If you actually read this whole post, thanks for your time and attention, and I hope you found it as interesting as I do!

Friday, February 23, 2024

Using caffeine to induce flow states

I pass on this link to an article in Neuroscience & Biobehavioral Reviews (open access) and show the Highlights and Abstract of the article below.  One of the coauthors, Steven Kotler, who is executive director at the "Flow Research Collective," was mentioned in my previous 2019 MindBlog post "A Schism in Flow-land? Flow Genome Project vs. Flow Research Collective."  It was the last in a series of critical posts that started in 2017. While I agree from my personal experience that caffeine (as well as other common stimulants) can induce more immersion in and focus on a task, I find the text, which has a bloated, unselective bibliography, to be mind-numbing gibble-gabble, just as were the writing efforts I was reviewing in 2017-2019.  However, the authors do offer a recitation, which some will find useful, of "psychological and biological effects of caffeine that, conceptually, enhance flow" - whatever that means.

Highlights

-Caffeine promotes motivation (‘wanting’) and lowers effort aversion, thus facilitating flow.
-Caffeine boosts flow by increasing parasympathetic high frequency heart rate variability.
-Striatal endocannabinoid modulation by caffeine improves stress tolerance and flow.
-Chronic caffeine alters network activity, resulting in greater alertness and flow.
-Caffeine re-wires the dopamine reward system in ADHD for better attention and flow.

 Abstract

Flow is an intrinsically rewarding state characterised by positive affect and total task absorption. Because cognitive and physical performance are optimal in flow, chemical means to facilitate this state are appealing. Caffeine, a non-selective adenosine receptor antagonist, has been emphasized as a potential flow-inducer. Thus, we review the psychological and biological effects of caffeine that, conceptually, enhance flow. Caffeine may facilitate flow through various effects, including: i) upregulation of dopamine D1/D2 receptor affinity in reward-associated brain areas, leading to greater energetic arousal and ‘wanting’; ii) protection of dopaminergic neurons; iii) increases in norepinephrine release and alertness, which offset sleep-deprivation and hypoarousal; iv) heightening of parasympathetic high frequency heart rate variability, resulting in improved cortical stress appraisal; v) modification of striatal endocannabinoid-CB1 receptor-signalling, leading to enhanced stress tolerance; and vi) changes in brain network activity in favour of executive function and flow. We also discuss the application of caffeine to treat attention deficit hyperactivity disorder and caveats. We hope to inspire studies assessing the use of caffeine to induce flow.

Monday, February 19, 2024

Comparing how generative AI and living organisms generate meaning suggests future direction for AI development

I want to pass on this open-access opinion article in Trends in Cognitive Sciences by Karl Friston, Andy Clark, and other prominent figures who study generative models of sentient behavior in living organisms.  (They suggest a future direction for AI development that is very similar to the vision described in the previous MindBlog post, which described a recent article by Venkatesh Rao.) Here are the highlights and abstract of the article.

Highlights

  • Generative artificial intelligence (AI) systems, such as large language models (LLMs), have achieved remarkable performance in various tasks such as text and image generation.
  • We discuss the foundations of generative AI systems by comparing them with our current understanding of living organisms, when seen as active inference systems.
  • Both generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways.
  • Living organisms and active inference agents learn their generative models by engaging in purposive interactions with the environment and by predicting these interactions. This provides them with a core understanding and a sense of mattering, upon which their subsequent knowledge is grounded.
  • Future generative AI systems might follow the same (biomimetic) approach – and learn the affordances implicit in embodied engagement with the world before – or instead of – being trained passively.

Abstract

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI.

Friday, February 09, 2024

Bodily maps of musical sensations across cultures

Interesting work from Putkinen et al. (open access):  

Significance

Music is inherently linked with the body. Here, we investigated how music's emotional and structural aspects influence bodily sensations and whether these sensations are consistent across cultures. Bodily sensations evoked by music varied depending on its emotional qualities, and the music-induced bodily sensations and emotions were consistent across the tested cultures. Musical features also influenced the emotional experiences and bodily sensations consistently across cultures. These findings show that bodily feelings contribute to the elicitation and differentiation of music-induced emotions and suggest similar embodiment of music-induced emotions in geographically distant cultures. Music-induced emotions may transcend cultural boundaries due to cross-culturally shared links between musical features, bodily sensations, and emotions.
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.

Friday, January 05, 2024

Capturing non-dual reality in language

For my own future reference and for MindBlog readers interested in my previous MindBlog posts on non-duality,  I want to pass on the start of a discussion thread in the Waking Up Community at WakingUp.com written by Rish Magal, London, U.K., on the subject of capturing nondual reality in language. One of the discussants makes reference to the Laukkonen and Slagter article whose ideas were referenced in my recent lecture on New Perspectives on how our Minds Work.  


From Rish Magal,   London, UK

Prompted by a really great discussion in another thread, I thought it would be an interesting experiment to try and express some important nondual insights in English. Comments and responses would be very much appreciated 😀.
Disclaimers:
    1    Despite being a very long post (sorry!), this is only a very very rough outline. I’m cutting lots of corners, and it would take a book to explain and defend all these ideas.
    2    I needed to stretch some of our normal concepts. This is not surprising, since our concepts are integral to a misguided way of looking at the world. I’ve tried to explain the stretching, but I might not have succeeded to your satisfaction!
    3    This is only one attempt to describe nondual reality. I’m not claiming it’s The One True Way. The purpose of the exercise is to explore whether it’s possible to capture the essence of nondual reality. If this version looks at all promising, other versions are surely possible too.


I’ll label each step, in case anyone wants to respond to an individual piece.

A) What we call a “self” is a useful fiction. Humans use it to plan their interactions with each other, to hold each other accountable, to organise their own behaviour … and a host of other purposes.

B) The fictional “self” is created by the human brain, acting in concert with other human brains. From cognitive science comes the idea that the self is part of a predictive model used by the brain (e.g. Anil Seth). From philosophy comes the idea that a self is a ‘Centre of Narrative Gravity’ (Dan Dennett). Perhaps both of these can be accurate together.

C) From psychology, we know that the human brain does not understand itself very well at all. It has very limited understanding of its own reasons for acting (Michael Gazzaniga). In fact it can easily be mistaken about whether it has even acted at all (Daniel Wegner).

D) There is no ‘Free Will’ or responsibility for actions. (Sam Harris makes a strong case on this, but there are several ways to argue for this conclusion.) Causal chains move through human bodies in the same way that they move through billiard balls. There’s no agency in a human brain, much less a “self”.

E) Perhaps the biggest flaw in our language is the idea that objects cause events. We reify objects with nouns, and causing with verbs. It would be more accurate to say: reality is a succession of events. But even that is too dualistic. More accurate still: there is simply one continuously unfolding process. Events inside human brains are like tiny ripples in this huge universe-sized sea.

F) By various means*, a human brain can come to Realise** that its habitual way of looking at the world is badly flawed, and can grasp one or more of the preceding points.

G) Such a realisation is usually accompanied by a huge emotional sense of relief, release, and bliss — especially the first few times.

H) A human brain which doesn’t have access to a structure of revised concepts (such as the one outlined here) will struggle to interpret its own experience. It is likely to attach to the emotional content, and try very hard to recapture that.

I) Over the centuries, many human brains which have glimpsed these truths have struggled and failed to express them in language. (Until very recent advances in science, psychology & philosophy, it would have been almost impossible to fill out a plausible account of what's going on.) Over the centuries, it has become common to say that these insights cannot be expressed in language.

J) My first claim is that nondual insights can be expressed in language. These steps are one attempt to do so.

K) *My second claim is that describing nondual reality in language (however imperfectly) can be extremely helpful in triggering a brain to grasp these insights. And help it return to them reliably.

L) **By "Realise" I mean something beyond (but including) understanding and agreeing with a statement. "Realising" means deeply believing it to the point of feeling in my bones that it is true.

As an analogy: I understand and believe that I'm going to die one day. But in terms of my daily activities it doesn't really feel like it's true, and my behaviour is basically indistinguishable from someone who thought he was immortal 😁. If tomorrow a neurologist shows me a scan of a huge tumour in my brain, I will Realise the truth that I am going to die in an entirely different way, and my behaviour is likely to change radically.

Note that I already have all the concepts I need to understand this truth. I think the shift from simple understanding & cognitive assent, to what I'm calling Realisation, is a much more immediate and tangible kind of belief, with much more emotional content. I now feel it to be true, as well as simply agreeing with the statement intellectually. But the content of the belief is the same, and can be expressed in language and concepts.
 
M) In this brain, the experienced shift from understanding to Realising usually produces symptoms like laughter, releasing, euphoria, connectedness, presence, empathy, equanimity. But after many such experiences, their intensity varies (at least in this brain). Importantly (IMO): these feelings/experiences are not the point of meditation. The point is the insights themselves — to realise/recognise the nondual nature of reality.

N) These statements above are not an attempt to capture the experience of realising/recognising them. There’s definitely a limit to the usefulness of words in conveying an experience. When people say that the experience of realisation cannot be captured in language, I would agree (though I think we can say a few things about it). But that’s not the case I’m making here. My case is that statements about nondual reality can be expressed in language, and doing so can be extremely helpful.

O) This list is not meant to be exhaustive. No doubt there are more statements about nondual reality which can be expressed in language, which this brain hasn’t grasped. If “your” brain knows of some, please post about them! Thanks!

Wednesday, November 29, 2023

Meta-Learned Models of Cognition

I pass on the text of a recent email from Behavioral and Brain Sciences inviting commentary on an article by Binz et al.  I am beginning to plow through the interesting text and figures - and will mention that motivated readers can obtain a PDF of the article from me.

Target Article: Meta-Learned Models of Cognition

Authors: Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, Matthew Botvinick, Jane X. Wang, and Eric Schulz

Deadline for Commentary Proposals: Wednesday, December 20, 2023

Abstract: Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates the commitment to a particular prior and a likelihood function which - in combination with Bayes' rule - determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to this day. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.

Keywords: meta-learning, rational analysis, Bayesian inference, cognitive modeling, neural networks

Monday, November 06, 2023

Visual event segmentation alters higher-level thought.

An interesting piece of work from Ongchoco et al.:  

Significance

Numbers can be unexpectedly powerful. Suppose you must provide the last two digits of your social security number (SSN), after which you are asked how much you are willing to pay for a bottle of wine. Though your SSN digits are irrelevant to your valuation judgment, they nonetheless influence numerical estimates, such that lower SSN digits lead to lower valuations and higher SSN digits, to higher valuations. Such “anchoring” effects are extremely prevalent and powerful. Here, we demonstrate how a feature of low-level visual perception—the spontaneous segmentation of experience at event boundaries, as when you simply walk through a doorway—can restrict or even eliminate anchoring effects across economic valuations, factual questions, and legal judgments.
Abstract
Research on higher-level thought has revealed many principles of reasoning and decision-making but has rarely made contact with how we perceive the world in the first place. Here we show how a lower-level property of perception—the spontaneous and task-irrelevant segmentation of continuous visual stimulation into discrete events—can restrict one of the most notorious biases in decision-making: numerical anchoring. Subjects walked down a long room in an immersive three dimensional (3D) animation and then made a numerical judgment (e.g., of how much a suitcase is worth, or of how many hours of community service a minor crime deserved). Critically, some subjects passed through a doorway (a visual event boundary) during their virtual walk, while others did not—equating time, distance traveled, and visual complexity. The anchoring manipulation was especially innocuous, not appearing to be part of the experiment at all. Before the online trial began, subjects reported the two-digit numerical value from a visually distorted “CAPTCHA” (“to verify that you are human”)—where this task-irrelevant anchor was either low (e.g., 29) or high (e.g., 92). With no doorway, we observed reliable anchoring effects: Higher CAPTCHA values produced higher estimates. With the doorway, however, such effects were attenuated or even eliminated. This generalized across tasks involving item valuations, factual questions, and legal judgments and in tests of both incidental and explicit anchoring. This demonstrates how spontaneous visual event segmentation can have profound consequences for higher-level thought.

Friday, November 03, 2023

How our genes support our illusory selves - the "Baldwin effect"

I have been thinking about giving more explanation of a paragraph from my Oct. 25 post, A Materialist's Credo:
Modern neuroscience has proved that our experienced perceptions of sensing and acting are these predictions. They are fantasies, or illusions, as is our sense of having a self with agency that experiences value, purpose, and meaning. Everything we do and experience is in the service of reducing surprises by fulfilling these fantasies. An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language.

The sentence "An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language." is invoking what is called "The Baldwin Effect" or Baldwin hypothesis. 

I decided to turn to Chat GPT-4 for some help.

Deric:  

What is the "Baldwin effect" in evolutionary biology, particularly with respect to human evolution?

ChatGPT: 

The "Baldwin effect" describes a mechanism by which learned behaviors can eventually become inherited traits through the process of natural selection. Named after the psychologist James Mark Baldwin, who proposed the idea in the late 19th century, this concept bridges the gap between phenotypic plasticity (the ability of an organism to change its phenotype in response to changes in the environment) and genetic evolution.

Here's a simplified overview of how the Baldwin effect works:
    1    Initial Variation: Within a population, there are variations in behavior. Some individuals can learn a new beneficial behavior in response to a change in the environment, while others cannot.
    2    Learning as a Temporary Solution: The ability to learn this behavior offers a short-term advantage. Those who can learn the new behavior have a better chance of surviving and reproducing.
    3    Genetic Fixation Over Time: Over generations, there can be genetic changes that make this learned behavior easier to acquire or even innate. As those with a predisposition to the behavior have a better survival and reproductive rate, genes that support the behavior become more common.
    4    Reduced Reliance on Learning: As the behavior becomes more genetically rooted, the population may become less reliant on learning the behavior from scratch. Eventually, what was once a learned behavior can manifest itself as an instinctual behavior.

With respect to human evolution, one could theorize the Baldwin effect playing a role in various capacities. For example, our ancestors might have learned specific skills, like tool use or certain social behaviors, that provided survival advantages. Over time, genetic predispositions towards these skills could have developed, leading to humans having innate tendencies or quicker learning capacities in these areas.

However, it's essential to note that proving the Baldwin effect in human evolution is challenging due to the intricacies of genetic and cultural evolution. Still, it offers an exciting perspective on the interplay between learning and genetics.

Deric: 

How might the Baldwin effect hypothesis be applied to the human invention of an illusory "I" or "self" capable of forming social groups, whose cohesion is reinforced by emotional behaviors driven by evolved neuroendocrine mechanisms?

ChatGPT:

The concept of an illusory "I" or "self" is deeply rooted in both philosophical discourses and cognitive science studies. When considered in the context of the Baldwin effect, the evolution of such a self-concept could be framed as an adaptive trait in the intricate web of human social structures. Here's one way to interpret this:   

1. Initial Variation and Learning: Early human communities might have had members who had stronger self-awareness or more robust personal identities than others. Such individuals might have been better at recognizing their roles within groups, remembering past interactions, and predicting future interactions. This self-awareness might initially emerge as a learned or nurtured cognitive trait.
2. Social Advantages: Individuals with a pronounced sense of self could have been more adept at forming and maintaining social bonds, making them crucial members of their communities. They might have been better at understanding social hierarchies, reciprocity, and the nuances of cooperation, which are essential for the survival and success of early human societies.
3. Emotional Behaviors and Neuroendocrine Mechanisms: As these individuals navigated their social landscapes, their brains could have evolved neuroendocrine mechanisms to support the complex emotions related to social interactions—empathy, jealousy, gratitude, shame, etc. These emotions, in turn, would reinforce the cohesion and stability of social groups, ensuring the group's collective survival and success.
4. Genetic Fixation Over Time: Over generations, the capacity for a strong self-concept and the accompanying emotional depth might become increasingly genetically ingrained. This evolution would mean that newer generations would be born with an innate predisposition towards these traits, reducing the learning curve required to acquire them.
5. Cultural Evolution: Parallel to the genetic fixation, cultural mechanisms might evolve to further reinforce the idea of the self and its importance in the society—through stories, traditions, rituals, etc. The interplay between genetic predispositions and cultural learning might lead to the establishment of the 'self' as a foundational concept in human societies.

In this context, the Baldwin effect would suggest that our ancestors' learned behavior—forming a concept of self for social advantages—over time, became a genetically supported predisposition. This evolution would mean that the complex cognitive and emotional behaviors supporting a strong self-concept became more instinctual and less reliant on individual learning.

Wednesday, November 01, 2023

David Brooks on "What is a Person?"


David Brooks has an astounding ability to simplify and present important ideas. I pass on a few clips from Chapter 5, titled "What is a Person?", of his new book “How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen.” In Chapter 9 he offers an equally lucid presentation of work in the cognitive sciences by Gibson and Proffitt showing how people in different life circumstances literally see different worlds. I've enjoyed reading this book and recommend that you read it. 

As we try to understand other people, we want to be constantly asking ourselves: How are they perceiving this situation? How are they experiencing this moment? How are they constructing their reality?
Let me dip briefly into brain science to try to show you how radical this process of construction is. Let’s take an example as simple as the act of looking around a room. It doesn’t feel like you're creating anything. It feels like you're taking in what’s objectively out there. You open your eyes. Light waves flood in. Your brain records what you see: a chair, a painting, a dust bunny on the floor. It feels like one of those old-fashioned cameras—the shutter opens and light floods in and gets recorded on the film.
But this is not how perception really works. Your brain is locked in the dark, bony vault of your skull. Its job is to try to make sense of the world given the very limited amount of information that makes it into your retinas, through the optic nerves, and onto the integrative layer of the visual cortex. Your senses give you a poor-quality, low-resolution snapshot of the world, and your brain is then forced to take that and construct a high-definition, feature-length movie.
To do that, your visual system constructs the world by taking what you already know and applying it to the scene in front of you. Your mind is continually asking itself questions like “What is this similar to?” and “Last time I was in this situation, what did I see next?” Your mind projects out a series of models of what it expects to see. Then the eyes check in to report back about whether they are seeing what the mind expected. In other words, seeing is not a passive process of receiving data; it’s an active process of prediction and correction.
Perception, the neuroscientist Anil Seth writes, is “a generative, creative act.” It is “an action-oriented construction, rather than a passive registration of an objective external reality.” Or as the neuroscientist Lisa Feldman Barrett notes, “Scientific evidence shows that what we see, hear, touch, taste, and smell are largely simulations of the world, not reactions to it.” Most of us non-neuroscientists are not aware of all this constructive activity, because it happens unconsciously. It's as if the brain is composing vast, complex Proustian novels, and to the conscious mind it feels like no work at all.
Social psychologists take a wicked delight in exposing the flaws of this prediction-correction way of seeing. They do this by introducing things into a scene that we don’t predict will be there and therefore don’t see. You probably know about the invisible gorilla experiment. Researchers present subjects with a video of a group of people moving around passing a basketball and ask the subjects to count the number of passes by the team wearing white. After the video, the researchers ask, “Did you see the gorilla?” Roughly half the research subjects have no idea what the researchers are talking about. But when they view the video a second time, with the concept “gorilla” now in their heads, they are stunned to see that a man in a gorilla suit had strolled right into the circle, stood there for a few seconds, and then walked out. They didn’t see it before because they didn’t predict “gorilla.”
In my favorite experiment of this sort, a researcher asks a student for directions to a particular place on a college campus. The student starts giving directions. Then a couple of “workmen”—actually, two other researchers— rudely carry a door between the directions asker and the directions giver. As the door passes between them, the directions asker surreptitiously trades places with one of the workmen. After the door has passed, the directions giver finds himself giving directions to an entirely different human being. And the majority of these directions givers don’t notice. They just keep on giving directions. We don’t expect one human being to magically turn into another, and therefore we don't see it when it happens.
In 1951 there was a particularly brutal football game between Dartmouth and Princeton. Afterward, fans of both teams were furious because, they felt, the opposing team had been so vicious. When psychologists had students rewatch a film of the game in a calmer setting, the students still fervently believed that the other side had committed twice as many penalties as their own side. When challenged about their biases, both sides pointed to the game film as objective proof that their side was right. As the psychologists researching this phenomenon, Albert Hastorf and Hadley Cantril, put it, “The data here indicate that there is no such ‘thing’ as a ‘game’ existing ‘out there’ in its own right which people merely ‘observe.’ The ‘game’ ‘exists’ for a person and is experienced by him only insofar as certain things have significances in terms of his purpose.” The students from the different schools constructed two different games depending on what they wanted to see. Or as the psychiatrist Iain McGilchrist puts it, “The model we choose to use to understand something determines what we find.”
Researchers like exposing the flaws in our way of seeing, but I’m constantly amazed at how brilliant the human mind is at constructing a rich, beautiful world. For example, in normal conversation, people often slur and mispronounce words. If you heard each word someone said in isolation, you wouldn't be able to understand 50 percent of them. But because your mind is so good at predicting what words probably should be in what sentence, you can easily create a coherent flow of meaning from other people's talk.
The universe is a drab, silent, colorless place. I mean this quite literally. There is no such thing as color and sound in the universe; it’s just a bunch of waves and particles. But because we have creative minds, we perceive sound and music, tastes and smells, color and beauty, awe and wonder. All that stuff is in here in your mind, not out there in the universe.
I've taken this dip into neuroscience to give the briefest sense of just how much creative artistry every person is performing every second of the day. And if your mind has to do a lot of constructive work in order for you to see the physical objects in front of you, imagine how much work it has to undertake to construct your identity, your life story, your belief system, your ideals. There are roughly eight billion people on Earth, and each one of them sees the world in their own unique, never-to-be-repeated way.

Monday, October 23, 2023

Architectural experience influences the processing of others’ body expressions

An open source article by Presti et al:  

Significance

The motor system has been recognized as a fundamental neural machinery for spatial and social cognition, making the study of the interplay between architecture and social behavior worthwhile. Here, we tested how a virtual architectural experience alters the subsequent processing of body expressions, showing that the motor system participates at two distinct stages: the earliest influenced by the dynamic architectural experience and the latter modulated by the actual physical characteristics. These findings highlight the existence of an overlapping motor neural substrate devoted to spatial and social cognition, with the architectural space exerting an early and possibly adapting effect on the later social experience. Ultimately, spatial design may impact the processing of human emotions.
Abstract
The interplay between space and cognition is a crucial issue in Neuroscience leading to the development of multiple research fields. However, the relationship between architectural space and the movement of the inhabitants and their interactions has been too often neglected, failing to provide a unifying view of architecture's capacity to modulate social cognition broadly. We bridge this gap by requesting participants to judge avatars’ emotional expression (high vs. low arousal) at the end of their promenade inside high- or low-arousing architectures. Stimuli were presented in virtual reality to ensure a dynamic, naturalistic experience. High-density electroencephalography (EEG) was recorded to assess the neural responses to the avatar’s presentation. Observing highly aroused avatars increased Late Positive Potentials (LPP), in line with previous evidence. Strikingly, 250 ms before the occurrence of the LPP, P200 amplitude increased due to the experience of low-arousing architectures, reflecting an early greater attention during the processing of body expressions. In addition, participants stared longer at the avatar’s head and judged the observed posture as more arousing. Source localization highlighted a contribution of the dorsal premotor cortex to both P200 and LPP. In conclusion, the immersive and dynamic architectural experience modulates human social cognition. In addition, the motor system plays a role in processing architecture and body expressions suggesting that the space and social cognition interplay is rooted in overlapping neural substrates. This study demonstrates that the manipulation of mere architectural space is sufficient to influence human social cognition.

Monday, October 09, 2023

What your brain is doing after the light turns green.

Gandhi and collaborators show that the neural activity in your brain when you step out to cross the street without looking right or left differs from the activity when you first look from side to side to be sure no cars are coming. Population-level analysis reveals that movement-related transient activity patterns in a population of superior colliculus neurons change between the two contexts, and this difference is not readily identifiable in single-unit recordings. Here is their technical abstract:

Sensorimotor transformation is the process of first sensing an object in the environment and then producing a movement in response to that stimulus. For visually guided saccades, neurons in the superior colliculus (SC) emit a burst of spikes to register the appearance of stimulus, and many of the same neurons discharge another burst to initiate the eye movement. We investigated whether the neural signatures of sensation and action in SC depend on context. Spiking activity along the dorsoventral axis was recorded with a laminar probe as Rhesus monkeys generated saccades to the same stimulus location in tasks that require either executive control to delay saccade onset until permission is granted or the production of an immediate response to a target whose onset is predictable. Using dimensionality reduction and discriminability methods, we show that the subspaces occupied during the visual and motor epochs were both distinct within each task and differentiable across tasks. Single-unit analyses, in contrast, show that the movement-related activity of SC neurons was not different between tasks. These results demonstrate that statistical features in neural activity of simultaneously recorded ensembles provide more insight than single neurons. They also indicate that cognitive processes associated with task requirements are multiplexed in SC population activity during both sensation and action and that downstream structures could use this activity to extract context. Additionally, the entire manifolds associated with sensory and motor responses, respectively, may be larger than the subspaces explored within a certain set of experiments.
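To make the paper's central point concrete, here is a toy demonstration of how two task contexts can look identical in single-neuron averages yet be cleanly separable at the population level. This is an illustrative sketch, not the authors' actual analysis; the firing-rate numbers, the two-neuron "population", and the template classifier are all invented.

```python
# Toy demonstration: single-unit means are identical across two task
# contexts, but the joint population pattern separates them.
# All numbers are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Trials: each is a (neuron1_rate, neuron2_rate) pair.
# Context A: the two neurons fire in an anti-correlated pattern.
trials_a = [(5.0, 1.0), (1.0, 5.0), (5.0, 1.0), (1.0, 5.0)]
# Context B: both neurons fire at a uniform intermediate rate.
trials_b = [(3.0, 3.0), (3.0, 3.0), (3.0, 3.0), (3.0, 3.0)]

# Single-unit view: average each neuron separately within each context.
means_a = [mean([t[i] for t in trials_a]) for i in range(2)]
means_b = [mean([t[i] for t in trials_b]) for i in range(2)]
print(means_a, means_b)  # identical: [3.0, 3.0] [3.0, 3.0]

# Population view: classify each trial by squared distance to the
# joint-pattern "templates" that characterize each context.
def classify(trial):
    d_a = min(sum((x - y) ** 2 for x, y in zip(trial, p))
              for p in [(5.0, 1.0), (1.0, 5.0)])
    d_b = sum((x - y) ** 2 for x, y in zip(trial, (3.0, 3.0)))
    return "A" if d_a < d_b else "B"

accuracy = mean([classify(t) == "A" for t in trials_a] +
                [classify(t) == "B" for t in trials_b])
print(accuracy)  # 1.0 -- the contexts are perfectly separable
```

The real study works in a much higher-dimensional space of simultaneously recorded neurons and uses dimensionality reduction rather than hand-built templates, but the logic is the same: context lives in the joint activity pattern, not in any one neuron's average.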

Wednesday, September 20, 2023

Chemistry that regulates whether we stay with what we're doing or try something new

Sidorenko et al. demonstrate that stimulating the brain's cholinergic and noradrenergic systems enhances optimal foraging behaviors in humans. Their significance statement and abstract:  

Significance

Deciding when to say “stop” to the ongoing course of action is paramount for preserving mental health, ensuring the well-being of oneself and others, and managing resources in a sustainable fashion. And yet, cross-species studies converge in their portrayal of real-world decision-makers who are prone to the overstaying bias. We investigated whether and how cognitive enhancers can reduce this bias in a foraging context. We report that the pharmacological upregulation of cholinergic and noradrenergic systems enhances optimality in a common dilemma—staying with the status quo or leaving for more rewarding alternatives—and thereby suggest that acetylcholine and noradrenaline causally mediate foraging behavior in humans.
Abstract
Foraging theory prescribes when optimal foragers should leave the current option for more rewarding alternatives. Actual foragers often exploit options longer than prescribed by the theory, but it is unclear how this foraging suboptimality arises. We investigated whether the upregulation of cholinergic, noradrenergic, and dopaminergic systems increases foraging optimality. In a double-blind, between-subject design, participants (N = 160) received placebo, the nicotinic acetylcholine receptor agonist nicotine, a noradrenaline reuptake inhibitor reboxetine, or a preferential dopamine reuptake inhibitor methylphenidate, and played the role of a farmer who collected milk from patches with different yield. Across all groups, participants on average overharvested. While methylphenidate had no effects on this bias, nicotine, and to some extent also reboxetine, significantly reduced deviation from foraging optimality, which resulted in better performance compared to placebo. Concurring with amplified goal-directedness and excluding heuristic explanations, nicotine independently also improved trial initiation and time perception. Our findings elucidate the neurochemical basis of behavioral flexibility and decision optimality and open unique perspectives on psychiatric disorders affecting these functions.
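The "theory" the abstract refers to is Charnov's marginal value theorem: a forager maximizing its long-run reward rate should leave a depleting patch once its instantaneous yield falls to the average rate across the environment. The following numerical sketch, with invented gain-function parameters rather than the study's actual task values, shows why "overstaying" is suboptimal.

```python
import math

# Marginal value theorem sketch (invented parameters, for illustration).
# A patch yields diminishing returns: gain(t) = A * (1 - exp(-r * t)).
A, r = 10.0, 0.5      # asymptotic yield and depletion rate (assumed)
travel = 2.0          # travel time between patches (assumed)

def gain(t):
    return A * (1.0 - math.exp(-r * t))

def overall_rate(t):
    # Long-run reward rate if the forager leaves every patch at time t.
    return gain(t) / (t + travel)

# The optimal leaving time maximizes the overall rate; find it by scan.
times = [i * 0.01 for i in range(1, 2000)]
t_opt = max(times, key=overall_rate)

# "Overharvesting" -- the bias the study reports -- means staying past
# t_opt; staying 50% too long lowers the long-run reward rate.
t_over = 1.5 * t_opt
print(round(t_opt, 2), round(overall_rate(t_opt), 3),
      round(overall_rate(t_over), 3))
```

With these assumed parameters the optimal leaving time is around 2.3 time units; any later departure collects a bit more milk from the current patch but lowers the overall rate, which is exactly the deviation the drugs in the study reduced.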

Wednesday, September 06, 2023

Mapping the physical properties of odorant molecules to their perceptual characteristics.

I pass on parts of the editor's summary and the abstract of a foundational piece of work by Lee et al. that produces a map linking odorant molecular structures to their perceptual experience, analogous to the known maps for vision and hearing that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. I also pass on the first few (slightly edited) paragraphs of the paper that set context. Motivated readers can obtain a PDF of the article from me. (This work does not engage the problem, noted by Sagar et al., that the same volatile molecule may smell different to different people - the same odor can smell ‘fruity’ and ‘floral’ to one person and ‘musky’ and ‘decayed’ to another.)  

Summary

For vision and hearing, there are well-developed maps that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. The sense of olfaction does not yet have such a map. Using a graph neural network, Lee et al. developed a principal odor map (POM) that faithfully represents known perceptual hierarchies and distances. This map outperforms previously published models to the point that replacing a trained human’s responses with the model output would improve overall panel description. The POM coordinates were able to predict odor intensity and perceptual similarity, even though these perceptual features were not explicitly part of the model training.
Abstract
Mapping molecular structure to odor perception is a key challenge in olfaction. We used graph neural networks to generate a principal odor map (POM) that preserves perceptual relationships and enables odor quality prediction for previously uncharacterized odorants. The model was as reliable as a human in describing odor quality: On a prospective validation set of 400 out-of-sample odorants, the model-generated odor profile more closely matched the trained panel mean than did the median panelist. By applying simple, interpretable, theoretically rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
Initial paragraphs of text:
A fundamental problem in neuroscience is mapping the physical properties of a stimulus to perceptual characteristics. In vision, wavelength maps to color; in audition, frequency maps to pitch. By contrast, the mapping from chemical structures to olfactory percepts is poorly understood. Detailed and modality-specific maps such as the Commission Internationale de l’Eclairage (CIE) color space (1) and Fourier space (2) led to a better understanding of visual and auditory coding. Similarly, to better understand olfactory coding, the field of olfaction needs a better map.
Pitch increases monotonically with frequency. By contrast, the relationship between odor percept and odorant structure is riddled with discontinuities...frequently structurally similar pairs are not perceptually similar pairs. These discontinuities in the structure-odor relationship suggest that standard chemoinformatic representations of molecules—functional group counts, physical properties, molecular fingerprints, and so on—that have been used in recent odor modeling work are inadequate to map odor space.
To generate odor-relevant representations of molecules, we constructed a message passing neural network (MPNN), which is a specific type of graph neural network (GNN), to map chemical structures to odor percepts. Each molecule was represented as a graph, with each atom described by its valence, degree, hydrogen count, hybridization, formal charge, and atomic number. Each bond was described by its degree, its aromaticity, and whether it is in a ring. Unlike traditional fingerprinting techniques, which assign equal weight to all molecular fragments within a set bond radius, a GNN can optimize fragment weights for odor-specific applications. Neural networks have unlocked predictive modeling breakthroughs in diverse perceptual domains [e.g., natural images, faces, and sounds] and naturally produce intermediate representations of their input data that are functionally high-dimensional, data-driven maps. We used the final layer of the GNN (henceforth, “our model”) to directly predict odor qualities, and the penultimate layer of the model as a principal odor map (POM). The POM (i) faithfully represented known perceptual hierarchies and distances, (ii) extended to out-of-sample (hereafter, “novel”) odorants, (iii) was robust to discontinuities in structure-odor distances, and (iv) generalized to other olfactory tasks.
We curated a reference dataset of ~5000 molecules, each described by multiple odor labels (e.g., creamy, grassy), by combining the Good Scents and Leffingwell & Associates (GS-LF) flavor and fragrance databases. To train our model, we optimized model parameters with a weighted-cross entropy loss over 150 epochs using Adam with a learning rate decaying from 5 × 10−4 to 1 × 10−5 and a batch size of 128. The GS-LF dataset was split 80/20 training/test, and the 80% training set further subdivided into five cross-validation splits. These cross-validation splits were used to optimize hyperparameters using Vizier, a Bayesian optimization algorithm, by tuning across 1000 trials. Details about model architecture and hyperparameters are given in the supplementary methods. When properly hyperparameter-tuned, performance was found to be robust across many model architectures. We present results for the model with the highest mean area under the receiver operating characteristic curve (AUROC) on the cross-validation set (AUROC = 0.89).
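To make the message-passing idea concrete, here is a heavily simplified sketch of two MPNN rounds on a toy three-atom "molecule". The atom features, shared weights, neighbor structure, and sum-pooling readout are all invented for illustration; the actual model learns its weights from the GS-LF data and uses the much richer atom and bond descriptors listed above.

```python
# Minimal message-passing sketch on a toy "molecule" graph.
# Features and weights are invented; the real model learns them
# from data and uses richer atom/bond descriptors.

# Toy molecule: 3 atoms in a chain, each with a 2-dim feature vector
# (stand-ins for, e.g., rescaled atomic number and degree).
features = [[1.0, 0.0], [0.5, 1.0], [1.0, 0.0]]
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # bonds of the chain

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

W = [[0.5, 0.2], [0.1, 0.3]]  # shared message weights (assumed)

def message_pass(feats):
    new_feats = []
    for i in range(len(feats)):
        # Sum messages from bonded neighbors, transform, add to self.
        msg = [sum(feats[j][k] for j in neighbors[i]) for k in range(2)]
        h = matvec(W, msg)
        new_feats.append(relu([a + b for a, b in zip(feats[i], h)]))
    return new_feats

h = features
for _ in range(2):          # two rounds of message passing
    h = message_pass(h)

# Readout: sum-pool atom states into a graph-level embedding --
# the analogue of a molecule's coordinate in the principal odor map.
embedding = [sum(atom[k] for atom in h) for k in range(2)]
print([round(x, 3) for x in embedding])
```

In the real model the penultimate-layer embedding is hundreds of dimensions and is what the authors call the POM; odor labels are then predicted from it by the final layer.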

Monday, September 04, 2023

Inhalation boosts perceptual awareness and decision speed

From Ludovic Molle et al. (open source):  

Significance

Breathing is a ubiquitous biological rhythm in animal life. However, little is known about its effect on consciousness and decision-making. Here, we measured the respiratory rhythm of humans performing a near-threshold discrimination experiment. We show that inhalation, compared with exhalation, improves perceptual awareness and accelerates decision-making while leaving accuracy unaffected.
Summary
The emergence of consciousness is one of biology’s biggest mysteries. During the past two decades, a major effort has been made to identify the neural correlates of consciousness, but in comparison, little is known about the physiological mechanisms underlying first-person subjective experience. Attention is considered the gateway of information to consciousness. Recent work suggests that the breathing phase (i.e., inhalation vs. exhalation) modulates attention, in such a way that attention directed toward exteroceptive information would increase during inhalation. One key hypothesis emerging from this work is that inhalation would improve perceptual awareness and near-threshold decision-making. The present study directly tested this hypothesis. We recorded the breathing rhythms of 30 humans performing a near-threshold decision-making task, in which they had to decide whether a liminal Gabor was tilted to the right or the left (objective decision task) and then to rate their perceptual awareness of the Gabor orientation (subjective decision task). In line with our hypothesis, the data revealed that, relative to exhalation, inhalation improves perceptual awareness and speeds up objective decision-making, without impairing accuracy. Overall, the present study builds on timely questions regarding the physiological mechanisms underlying consciousness and shows that breathing shapes the emergence of subjective experience and decision-making.

Monday, August 28, 2023

A shared novelty-seeking basis for creativity and curiosity

I pass on the abstract of a target article having the title of this post, sent to me by Behavioral and Brain Sciences. I'm reading through it and would be willing to send a PDF of the article to motivated MindBlog readers who wish to check it out.
Curiosity and creativity are central pillars of human growth and invention. While they have been studied extensively in isolation, the relationship between them has not yet been established. We propose that curiosity and creativity both emanate from the same mechanism of novelty-seeking. We first present a synthesis showing that curiosity and creativity are affected similarly by a number of key cognitive faculties such as memory, cognitive control, attention, and reward. We then review empirical evidence from neuroscience research, indicating that the same brain regions are involved in both curiosity and creativity, focusing on the interplay between three major brain networks: the default-mode network, the salience network, and the executive control network. After substantiating the link between curiosity and creativity, we propose a novelty-seeking model (NSM) that underlies them both and suggest that the manifestation of the NSM is governed by one’s state of mind (SoM).

Monday, July 31, 2023

The visible gorilla.

A staple of my lectures in the 1990s was showing the ‘invisible gorilla’ video, in which viewers were asked to count the number of times that students with white shirts passed a basketball. After the start of the game a student in a gorilla costume walks slowly through the group, pauses in the middle to wave, and moves off screen to the left. Most viewers who are busy counting the ball passes don’t report seeing the gorilla. Here's the video:


Wallisch et al. now update this experiment on inattentional blindness in an article titled "The visible gorilla: Unexpected fast—not physically salient—Objects are noticeable." Here are their summaries:  

Significance

Inattentional blindness, the inability to notice unexpected objects if attention is focused on a task, is one of the most striking phenomena in cognitive psychology. It is particularly surprising, in light of the research on attentional capture and motion perception, that human observers suffer from this effect even when the unexpected object is moving. Inattentional blindness is commonly interpreted as an inevitable cognitive deficit—the flip side of task focusing. We show that this interpretation is incomplete, as observers can balance the need to focus on task demands with the need to hedge for unexpected but potentially important objects by redeploying attention in response to fast motion. This finding is consistent with the perspective of a fundamentally competent agent who effectively operates in an uncertain world.
Abstract
It is widely believed that observers can fail to notice clearly visible unattended objects, even if they are moving. Here, we created parametric tasks to test this belief and report the results of three high-powered experiments (total n = 4,493) indicating that this effect is strongly modulated by the speed of the unattended object. Specifically, fast—but not slow—objects are readily noticeable, whether they are attended or not. These results suggest that fast motion serves as a potent exogenous cue that overrides task-focused attention, showing that fast speeds, not long exposure duration or physical salience, strongly diminish inattentional blindness effects.

Wednesday, June 21, 2023

Turing, von Neumann, and the computational architecture of biological machines

I pass on the abstract of a PNAS perspective article by Hashim M. Al-Hashimi (motivated readers can obtain a PDF of the article from me).
In the mid-1930s, the English mathematician and logician Alan Turing invented an imaginary machine which could emulate the process of manipulating finite symbolic configurations by human computers. His machine launched the field of computer science and provided a foundation for the modern-day programmable computer. A decade later, building on Turing’s machine, the American–Hungarian mathematician John von Neumann invented an imaginary self-reproducing machine capable of open-ended evolution. Through his machine, von Neumann answered one of the deepest questions in Biology: Why is it that all living organisms carry a self-description in the form of DNA? The story behind how two pioneers of computer science stumbled on the secret of life many years before the discovery of the DNA double helix is not well known, not even to biologists, and you will not find it in biology textbooks. Yet, the story is just as relevant today as it was eighty years ago: Turing and von Neumann left a blueprint for studying biological systems as if they were computing machines. This approach may hold the key to answering many remaining questions in Biology and could even lead to advances in computer science.

Monday, June 05, 2023

A simple heuristic for distinguishing lie from truth

Work by Verschuere et al. shows that a simple heuristic of only judging the level of detail in the message consistently allows people to discriminate lies from truths:
Decades of research have shown that people are poor at detecting deception. Understandably, people struggle with integrating the many putative cues to deception into an accurate veracity judgement. Heuristics simplify difficult decisions by ignoring most of the information and relying instead only on the most diagnostic cues. Here we conducted nine studies in which people evaluated honest and deceptive handwritten statements, video transcripts, videotaped interviews or live interviews. Participants performed at the chance level when they made intuitive judgements, free to use any possible cue. But when instructed to rely only on the best available cue (detailedness), they were consistently able to discriminate lies from truths. Our findings challenge the notion that people lack the potential to detect deception. The simplicity and accuracy of the use-the-best heuristic provides a promising new avenue for deception research.
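The "use-the-best" strategy can be made concrete: discard every putative cue except the single most diagnostic one (detailedness) and apply a simple cutoff. This toy sketch invents its cue scores, labels, and threshold purely to illustrate the structure of the heuristic; it is not the authors' materials or their classifier.

```python
# Take-the-best sketch: judge statements as lie/truth using only the
# single most diagnostic cue (detailedness). All data are invented.

# Each statement: (detailedness score 0-10, other noisy "cues"...).
truths = [(8, 3, 1), (7, 9, 2), (9, 1, 7), (6, 5, 5)]
lies   = [(3, 8, 2), (2, 4, 9), (4, 7, 1), (1, 2, 8)]

THRESHOLD = 5  # assumed cutoff: detailed statements are judged true

def judge(statement):
    detailedness = statement[0]   # deliberately ignore every other cue
    return "truth" if detailedness > THRESHOLD else "lie"

correct = sum(judge(s) == "truth" for s in truths) + \
          sum(judge(s) == "lie" for s in lies)
accuracy = correct / (len(truths) + len(lies))
print(accuracy)  # 1.0 on this toy set
```

The point of the heuristic is what the judge function ignores: integrating the extra, weakly diagnostic cues is exactly what drags intuitive judgements down to chance in the studies.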

Friday, May 12, 2023

Virality

This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13 from the seventh section of her book, titled "Virality".

The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our  human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.

The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.

When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. 
Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
            
The general public has become preoccupied with robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.

The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to distinguish between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.

Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that defeated the champion Go player Lee Sedol in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”

If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in the phrase “human error,” which is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.

If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.


Friday, May 05, 2023

The Data Deluge - Dataism

This post is the eighth installment in my effort to pass on, to both MindBlog readers and my future self, an idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 11 and 12, from the sixth section of her book, titled "Algorithm."

Chapter 11  

In the year 2001 alone, the amount of information generated doubled that of all information produced in human history. In 2002 it doubled again, and this trend has continued every year since. As Chris Anderson noted, researchers in virtually every field have so much information that it is difficult to find relationships between things or make predictions.

What companies like Google discovered is that when you have data on this scale, you no longer need a theory at all. You can simply feed the numbers into algorithms and let them make predictions based on the patterns and relationships they notice…
Google Translate “learned” to translate English to French simply by scanning Canadian documents that contained both languages, even though the algorithm has no model that understands either language.

These mathematical tools can predict and understand the world more adequately than any theory could. Petabytes allow us to say: ‘Correlation is enough’…We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can let statistical algorithms find patterns where science cannot. Of course, data alone can’t tell us why something happens—the variables on that scale are too legion—but maybe our need to know why was misguided. Maybe we should stop trying to understand the world and instead trust the wisdom of algorithms…technologies that have emerged have not only affirmed the uselessness of our models but revealed that machines are able to generate their own models of the world…this approach marks a return to a premodern epistemology…If we are no longer permitted to ask why…we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment…
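The kind of theory-free pattern-finding described in these clips can be illustrated with a toy sketch of the Google Translate example: a program that "learns" English-French word correspondences purely from co-occurrence counts in parallel sentences, with no model of either language. (This is my own minimal illustration with made-up data, not Google's actual method, which operated at vastly larger scale with far more sophisticated statistics.)

```python
# Toy "model-free" translation: learn word correspondences purely from
# co-occurrence counts in parallel sentences. The program has no grammar
# or semantics of either language, only correlation.
from collections import Counter, defaultdict

# Hypothetical parallel corpus (English / French sentence pairs).
corpus = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the blue car", "la voiture bleue"),
    ("the car", "la voiture"),
]

# Count how often each English word appears alongside each French word,
# and how often each French word appears overall.
cooc = defaultdict(Counter)
fr_totals = Counter()
for en, fr in corpus:
    fr_words = fr.split()
    fr_totals.update(fr_words)
    for e in en.split():
        for f in fr_words:
            cooc[e][f] += 1

def translate_word(e):
    # Pick the French word most strongly associated with e, normalized by
    # the French word's overall frequency so that ubiquitous words like
    # "la" don't win every match.
    return max(cooc[e], key=lambda f: cooc[e][f] / fr_totals[f])

print(translate_word("house"))  # maison
print(translate_word("car"))    # voiture
print(translate_word("blue"))   # bleue
```

The program gets the right answers here without any notion of what the words mean, which is the point of the passage: at sufficient scale, correlation alone can do work that once seemed to require a theory.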

Deep learning, an especially powerful brand of machine learning, has become the preferred means of drawing predictions from our era’s deluge of raw data. Credit auditors use it to decide whether or not to grant a loan. The CIA uses it to anticipate social unrest. The systems can be found in airport security software…many people now find themselves in a position much like Job’s, denied the right to know why they were refused a loan or fired from a job or given a likelihood of developing cancer. It’s difficult, in fact, to avoid the comparison to divine justice, given that our justice system has become a veritable laboratory of machine-learning experiments…In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”

Hans Blumenberg, the postwar German philosopher, notes in his 1966 book The Legitimacy of the Modern Age—one of the major disenchantment texts—that theologians began to doubt around the thirteenth century that the world could have been created for man’s benefit…Blumenberg believed that it was impossible to understand ourselves as modern subjects without taking into account the crisis that spawned us. To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated as implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
            
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually returns to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real. The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.

Chapter 12

Here are a few clips from the last section of Chapter 12, on the errors of algorithms.

It’s not difficult to find examples these days of technologies that contain ourselves “in a different disguise.” Although the most impressive machine-learning technologies are often described as “alien” and unlike us, they are prone to errors that are all too human. Because these algorithms rely on historical data—using information about the past to make predictions about the future—their decisions often reflect the biases and prejudices that have long colored our social and political life. Google’s algorithms show more ads for low-paying jobs to women than to men. Amazon’s same-day delivery algorithms were found to bypass black neighborhoods. A ProPublica report found that the COMPAS sentencing assessment was far more likely to assign high recidivism risk scores to black defendants than to white defendants. These systems do not target specific races or genders, or even take these factors into account. But they often zero in on other information—zip codes, income, previous encounters with police—that is freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.
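The proxy mechanism this passage describes can be sketched in a few lines. The following is a toy illustration with fabricated data (not any real lending system): a decision rule that never sees group membership still reproduces, and even sharpens, a historical disparity, because zip code stands in for group.

```python
# Toy illustration of proxy-variable bias: a model trained on biased
# historical data never sees group membership, yet its decisions track it,
# because a correlated feature (zip code) stands in for the group.
import random

random.seed(0)

# Fabricated history: group strongly correlates with zip code, and past
# approvals were biased (80% for group A vs. 30% for group B).
def make_record():
    group = random.choice(["A", "B"])
    in_majority_zip = random.random() < 0.9
    zip_code = "10001" if (group == "A") == in_majority_zip else "20002"
    approved = random.random() < (0.8 if group == "A" else 0.3)
    return group, zip_code, approved

history = [make_record() for _ in range(10_000)]

# "Learn" an approval rate per zip code from the biased history.
counts = {}
for _, z, ok in history:
    n, k = counts.get(z, (0, 0))
    counts[z] = (n + 1, k + ok)
approval_rate = {z: k / n for z, (n, k) in counts.items()}

# The new rule uses only zip code, never group membership.
def model_approves(zip_code):
    return approval_rate[zip_code] > 0.5

by_group = {"A": [], "B": []}
for g, z, _ in history:
    by_group[g].append(model_approves(z))

for g in ("A", "B"):
    share = sum(by_group[g]) / len(by_group[g])
    print(f"group {g}: {share:.0%} approved by the model")
```

With this data, roughly 90 percent of group A but only about 10 percent of group B end up approved, a wider gap than in the biased history itself: the feedback loop the passage describes.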

It is much easier…to blame injustice on faulty algorithms than it is to contend in more meaningful ways with what they reveal about us and our society. In many cases the reflections of us that these machines produce are deeply unflattering. To take a particularly publicized example, one might recall Tay, the AI chatbot that Microsoft released in 2016, which was designed to engage with people on Twitter and learn from her interactions with users. Within sixteen hours she began spewing racist and sexist vitriol, denied the Holocaust, and declared support for Hitler.

For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths. The desire to send humans to space was for her a metaphor for this dream of scientific transcendence. She tried to imagine what the earth and terrestrial human activity must look like from so far beyond its surface:
            
“If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats. Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.”
            
The problem is that a vantage so far removed from human nature cannot account for human agency. The view of earth from the Archimedean point compels us to regard our inventions not as historical choices but as part of an inexorable evolutionary process that is entirely deterministic and teleological, much like Kurzweil’s narrative about the Singularity. We ourselves inevitably become mere cogs in this machine, unable to account for our actions in any meaningful way, as the only valid language is the language of quantification, which machines understand far better than we do.

This is more or less what Jaron Lanier warned about in his response to Chris Anderson’s proposal that we should abandon the scientific method and turn to algorithms for answers. “The point of a scientific theory is not that an angel will appreciate it,” Lanier wrote. “Its purpose is human comprehension. Science without a quest for theories means science without humans.” What we are abdicating, in the end, is our duty to create meaning from our empirical observations—to define for ourselves what constitutes justice, and morality, and quality of life—a task we forfeit each time we forget that meaning is an implicitly human category that cannot be reduced to quantification. To forget this truth is to use our tools to thwart our own interests, to build machines in our image that do nothing but dehumanize us.