Monday, January 01, 2024

On shifting perspectives....

I pass on clips from a piece in the 12/2023 Wall Street Journal by Carlo Rovelli, the author, most recently, of ‘White Holes: Inside the Horizon’

Somnium

By Johannes Kepler (1634)

1 Perhaps the greatest conceptual earthquake in the history of civilization was the Copernican Revolution. Prior to Copernicus, there were two realms: the celestial and the terrestrial. Celestial things orbit, terrestrial ones fall. The former are eternal, the latter perishable. Copernicus proposed a different organization of reality, in which the sun is in a class of its own. In another class are the planets, with the Earth being merely one among many. The moon is in yet another class, all by itself. Everything revolves around the sun, but the moon revolves around the Earth. This mad subversion of conventional reason was taken seriously only after Galileo and Kepler convinced humankind that Copernicus was indeed right. “Somnium” (“The Dream”) is the story of an Icelandic boy—Kepler’s alter ego—his witch mother and a daemon. The daemon takes the mother and son up to the moon to survey the universe, showing explicitly that what they usually see from Earth is the perspective from a moving body. Sheer genius.

History

By Elsa Morante (1974)

2 This passionate and intelligent novel is a fresco of Italy during World War II. “La Storia,” its title in Italian, can be translated as “story” or “tale” as well as “history.” Elsa Morante plumbs the complexity of humankind and its troubles, examining the sufferings caused by war. She writes from the view of the everyday people who bear the burden of the horror. This allows her to avoid taking sides and to see the humanity on both sides. The subtitle of this masterpiece—“a scandal that has lasted for ten thousand years”—captures Morante’s judgment of war, inviting us to a perspective shift on all wars.

Collected Poems of Lenore Kandel

By Lenore Kandel (2012)

3 Lenore Kandel was a wonderful and underrated poet who was part of the Beat-hippie movement in California. The tone of her poems varies widely, from bliss to desperation: “who finked on the angels / who stole the holy grail and hocked it for a jug of wine?” She created a scandal in the late 1960s by writing about sex in a strong, vivid way. Her profoundly anticonformist voice offers a radical shift of perspective by singing the beauty and the sacredness of female desire.

Why Empires Fall

By Peter Heather and John Rapley (2023)

4 As an Italian, I have long been intrigued by the fall of the Roman Empire. Peter Heather and John Rapley summarize the recent historiographic reassessments of the reasons for the fall. Their work also helps in understanding the present. Empires don’t necessarily collapse because they weaken. They fall because their success brings prosperity to a wider part of the world. They fall if they cannot adjust to the consequent rebalancing of power and if they try to stop history with the sheer power of weapons. “The easiest response to sell to home audiences still schooled in colonial history is confrontation,” the authors write. “This has major, potentially ruinous costs, compared to the more realistic but less immediately popular approach of accepting the inevitability of the periphery’s rise and trying to engage with it.”

The Mūlamadhyamakakārikā

By Nāgārjuna (ca. A.D. 150)

5 This major work of the ancient Indian Buddhist philosopher Nāgārjuna lives on in modern commentaries and translations. Among the best in English is Jay L. Garfield’s “The Fundamental Wisdom of the Middle Way” (1995). Nāgārjuna’s text was repeatedly recommended to me in relation to my work on the interpretation of quantum theory. I resisted, suspicious of facile and often silly juxtapositions between modern science and Eastern philosophy. Then I read it, and it blew my mind. It does indeed offer a possible philosophical underpinning to relational quantum mechanics, which I consider the best way to understand quantum phenomena. But it offers more: a dizzying and captivating philosophical perspective that renounces any foundation. According to this view, the only way to understand something is through its relation with something else—nothing by itself has an independent reality. In the language of Nāgārjuna, every thing, taken by itself, is “empty,” including emptiness itself. I find this a fascinating intellectual perspective as well as a source of serenity, with its acceptance of our limits and impermanence.


Thursday, December 28, 2023

Origins of our current crises in the 1990s, the great malformation, and the illusion of race.

I'm passing on three clips I found most striking from David Brooks's recent NYTimes Sidney Awards column:

I generally don’t agree with the arguments of those on the populist right, but I have to admit there’s a lot of intellectual energy there these days. (The Sidneys go to essays that challenge readers, as well as to those that affirm.) With that, the first Sidney goes to Christopher Caldwell for his essay “The Fateful Nineties” in First Things. Most people see the 1990s as a golden moment for America — we’d won the Cold War, we enjoyed solid economic growth, the federal government sometimes ran surpluses, crime rates fell, tech took off.

Caldwell, on the other hand, describes the decade as one in which sensible people fell for a series of self-destructive illusions: Globalization means nation-states don’t matter. Cyberspace means the material world is less important. Capitalism can run on its own without a countervailing system of moral values. Elite technocrats can manage the world better than regular people. The world will be a better place if we cancel people for their linguistic infractions.

As Caldwell sums it up: “America’s discovery of world dominance might turn out in the 21st century to be what Spain’s discovery of gold had been in the 16th — a source of destabilization and decline disguised as a windfall.”

***************** 

In “The Great Malformation,” Talbot Brewer observes that parenthood comes with “an ironclad obligation to raise one’s children as best one can.” But these days parents have surrendered child rearing to the corporations that dominate the attention industry (TikTok, Facebook, Instagram and so on): “The work of cultural transmission is increasingly being conducted in such a way as to maximize the earnings of those who oversee it.”

He continues: “We would be astonished to discover a human community that did not attempt to pass along to its children a form of life that had won the affirmation of its elders. We would be utterly flabbergasted to discover a community that went to great lengths to pass along a form of life that its elders regarded as seriously deficient or mistaken. Yet we have slipped unawares into precisely this bizarre arrangement.” In most societies, the economy takes place in a historically rooted cultural setting. But in our world, he argues, the corporations own and determine the culture, shaping our preferences and forming, or not forming, our conception of the good.

*****************

It’s rare that an essay jolts my convictions on some major topic. But that happened with one by Subrena E. Smith and David Livingstone Smith, called “The Trouble With Race and Its Many Shades of Deceit,” in New Lines Magazine. The Smiths are, as they put it, a so-called mixed-race couple — she has brown skin, his is beige. They support the aims of diversity, equity and inclusion programs but argue that there is a fatal contradiction in many antiracism programs: “Although the purpose of anti-racist training is to vanquish racism, most of these initiatives are simultaneously committed to upholding and celebrating race.” They continue: “In the real world, can we have race without racism coming along for the ride? Trying to extinguish racism while shoring up race is like trying to put out a fire by pouring gasoline on it.”

I’ve heard this argument — that we should seek to get rid of the whole concept of race — before and dismissed it. I did so because too many people I know have formed their identity around racial solidarity — it’s a source of meaning and strength in their lives. The Smiths argue that this is a mistake because race is a myth: “The scientific study of human variation shows that race is not meaningfully understood as a biological grouping, and there are no such things as racial essences. There is now near consensus among scholars that race is an ideological construction rather than a biological fact. Race was fashioned for nothing that was good. History has shown us how groups of people ‘racialize’ other groups of people to justify their exploitation, oppression and annihilation.”

Monday, December 25, 2023

Large Language Models are not yet providing theories of human language.

From Dentella et al. (open access):

Significance
The synthetic language generated by recent Large Language Models (LMs) strongly resembles the natural languages of humans. This resemblance has given rise to claims that LMs can serve as the basis of a theory of human language. Given the absence of transparency as to what drives the performance of LMs, the characteristics of their language competence remain vague. Through systematic testing, we demonstrate that LMs perform nearly at chance in some language judgment tasks, while revealing a stark absence of response stability and a bias toward yes-responses. Our results raise the question of how knowledge of language in LMs is engineered to have specific characteristics that are absent from human performance.
Abstract
Humans are universally good in providing stable and accurate judgments about what forms part of their language and what not. Large Language Models (LMs) are claimed to possess human-like language abilities; hence, they are expected to emulate this behavior by providing both stable and accurate answers, when asked whether a string of words complies with or deviates from their next-word predictions. This work tests whether stability and accuracy are showcased by GPT-3/text-davinci-002, GPT-3/text-davinci-003, and ChatGPT, using a series of judgment tasks that tap on 8 linguistic phenomena: plural attraction, anaphora, center embedding, comparatives, intrusive resumption, negative polarity items, order of adjectives, and order of adverbs. For every phenomenon, 10 sentences (5 grammatical and 5 ungrammatical) are tested, each randomly repeated 10 times, totaling 800 elicited judgments per LM (total n = 2,400). Our results reveal variable above-chance accuracy in the grammatical condition, below-chance accuracy in the ungrammatical condition, a significant instability of answers across phenomena, and a yes-response bias for all the tested LMs. Furthermore, we found no evidence that repetition aids the Models to converge on a processing strategy that culminates in stable answers, either accurate or inaccurate. We demonstrate that the LMs’ performance in identifying (un)grammatical word patterns is in stark contrast to what is observed in humans (n = 80, tested on the same tasks) and argue that adopting LMs as theories of human language is not motivated at their current stage of development.
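
To make the abstract's measures concrete: here is a minimal sketch (my own toy simulation, not the authors' code or data) of how accuracy, yes-response bias, and response stability can be scored from judgments elicited under this design. The response probabilities are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulate the paper's design for one model: 8 phenomena x 10 sentences
# (5 grammatical, 5 ungrammatical) x 10 repetitions = 800 judgments.
rows = []
for phenomenon in range(8):
    for sentence in range(10):
        grammatical = sentence < 5
        for rep in range(10):
            p_yes = 0.75 if grammatical else 0.55  # invented response process with a yes-bias
            rows.append(dict(phenomenon=phenomenon, sentence=sentence,
                             grammatical=grammatical,
                             says_yes=bool(rng.random() < p_yes)))
df = pd.DataFrame(rows)

# Accuracy: "yes" is correct for grammatical strings, "no" for ungrammatical.
df["correct"] = df["says_yes"] == df["grammatical"]
print(df.groupby("grammatical")["correct"].mean())

# Yes-response bias: overall acceptance rate (0.5 would be unbiased here).
print("yes-rate:", df["says_yes"].mean())

# Stability: how often the 10 repetitions of the same sentence agree.
per_item_yes = df.groupby(["phenomenon", "sentence"])["says_yes"].mean()
print("mean per-item consistency:", np.maximum(per_item_yes, 1 - per_item_yes).mean())
```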

Friday, December 22, 2023

Three common assumptions about inflammation, aging, and health that are probably wrong

The abstract of a recent PNAS article by Thomas W. McDade (motivated readers can obtain the whole article from me):  

Significance

Inflammation is one of the most important, and potent, physiological systems in the human body. It is widely assumed that levels of inflammation increase with age and that chronic inflammation contributes to cardiovascular diseases. This understanding of inflammation is based on studies of people living in affluent, industrialized settings with low burdens of infectious disease. A broader view, based on research conducted across a wider range of ecological settings globally, indicates that chronic inflammation is not necessarily a “normal” part of aging and that the association between inflammation and age-related diseases is not inevitable. It also suggests that environments early in development have lasting effects on the regulation of inflammation in adulthood, with implications for diseases of aging.
Abstract
Chronic inflammation contributes to the onset and progression of cardiovascular disease and other degenerative diseases of aging. But does it have to? This article considers the associations among inflammation, aging, and health through the lens of human population biology and suggests that chronic inflammation is not a normal nor inevitable component of aging. It is commonly assumed that conclusions drawn from research in affluent, industrialized countries can be applied globally; that aging processes leading to morbidity and mortality begin in middle age; and that inflammation is pathological. These foundational assumptions have shifted focus away from inflammation as a beneficial response to infection or injury and toward an understanding of inflammation as chronic, dysregulated, and dangerous. Findings from community-based studies around the world—many conducted in areas with relatively high burdens of infectious disease—challenge these assumptions by documenting substantial variation in levels of inflammation and patterns of association with disease. They also indicate that nutritional, microbial, and psychosocial environments in infancy and childhood play important roles in shaping inflammatory phenotypes and their contributions to diseases of aging. A comparative, developmental, and ecological approach has the potential to generate novel insights into the regulation of inflammation and how it relates to human health over the life course.

Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this posting to save for myself some clips of text from Venkatesh Rao’s most recent piece, to continue mulling over where I place it on the trivial-versus-sublime spectrum. (Some of his jargon you will understand only if you have followed the previous installments on Rao I've put in MindBlog; note the link at the end to The Summer of Protocols.) Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.
But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.
So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:
Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.
In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.
I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.
Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.
Call this property hardness.
Human civilization depends in part on our ability to make the future more certain in specific ways.
Fixed, hard points across time that let us make the world more predictable.
We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.
The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.
It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
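
Rao's ATM example is worth spelling out, since it shows how small a unit of hardness engineering can be. A toy sketch (mine, not his), in which the two procedures differ only in the order of two steps:

```python
# The second procedure "hardens" card retrieval by making the cash the
# incentive to finish the protocol.
class ATM:
    def dispense_cash(self):
        print("cash dispensed")

    def return_card(self):
        print("card returned")

def withdraw_soft(atm: ATM):
    atm.dispense_cash()  # user may walk away here, forgetting the card
    atm.return_card()

def withdraw_hard(atm: ATM):
    atm.return_card()    # card must be taken first...
    atm.dispense_cash()  # ...because the cash comes only afterward

withdraw_hard(ATM())
```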
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.
In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.
The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.
The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.

Monday, December 18, 2023

Traumatic Memories Are Processed as Present Experience

Ellen Barry points to work by Perl et al. showing that different patterns of brain activity underlie sad versus traumatic autobiographical memories.  The Perl et al. abstract:

For people with post-traumatic stress disorder (PTSD), recall of traumatic memories often displays as intrusions that differ profoundly from processing of ‘regular’ negative memories. These mnemonic features fueled theories speculating a unique cognitive state linked with traumatic memories. Yet, to date, little empirical evidence supports this view. Here we examined neural activity of patients with PTSD who were listening to narratives depicting their own memories. An intersubject representational similarity analysis of cross-subject semantic content and neural patterns revealed a differentiation in hippocampal representation by narrative type: semantically similar, sad autobiographical memories elicited similar neural representations across participants. By contrast, within the same individuals, semantically similar trauma memories were not represented similarly. Furthermore, we were able to decode memory type from hippocampal multivoxel patterns. Finally, individual symptom severity modulated semantic representation of the traumatic narratives in the posterior cingulate cortex. Taken together, these findings suggest that traumatic memories are an alternative cognitive entity that deviates from memory per se.
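
For readers unfamiliar with intersubject representational similarity analysis, here is a heavily simplified sketch of its core logic (toy data and invented dimensions, not the authors' pipeline): compare the cross-subject similarity structure of the narratives' semantic content with the cross-subject similarity structure of hippocampal activity patterns:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_subj, sem_dim, n_vox = 20, 50, 300  # invented sizes

def intersubject_rsa(semantic, neural):
    """Correlate cross-subject semantic similarity with cross-subject
    neural pattern similarity, using only the off-diagonal subject pairs."""
    sem_sim = np.corrcoef(semantic)  # subjects x subjects
    neu_sim = np.corrcoef(neural)
    iu = np.triu_indices(len(semantic), k=1)
    return spearmanr(sem_sim[iu], neu_sim[iu]).statistic

# One semantic embedding and one hippocampal pattern per subject's narrative.
sem_sad = rng.normal(size=(n_subj, sem_dim))
neu_sad = sem_sad @ rng.normal(size=(sem_dim, n_vox))  # neural geometry tracks semantics
sem_trauma = rng.normal(size=(n_subj, sem_dim))
neu_trauma = rng.normal(size=(n_subj, n_vox))          # neural geometry is idiosyncratic

print("sad memories:   ", round(intersubject_rsa(sem_sad, neu_sad), 2))        # high
print("trauma memories:", round(intersubject_rsa(sem_trauma, neu_trauma), 2))  # near zero
```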

Friday, December 15, 2023

The ebb and flow of physical and cognitive fatigue

Matthews et al. have investigated moment-to-moment fluctuations in fatigue using behavioral experiments and computational modeling to offer a precise account of how fatigue waxes (during physical and cognitive effort) and wanes (during rest). From Bijleveld's review of the work:
A key insight from this work is that there are important parallels between physical and cognitive fatigue. Specifically, for both types of fatigue, the best-fitting computational model parsed fatigue into two components: a recoverable component (i.e., the share of fatigue that increases with effort and recovers with rest) and an unrecoverable component (i.e., the share of fatigue that only increases with effort and does not recover with rest, at least not within the ~1-h session). For physical fatigue, this result conceptually replicates a previous study; for cognitive fatigue, this result is new and original. Together, these findings paint a nuanced picture of how both physical and cognitive fatigue emerge over time. From an applied angle, the distinction between recoverable and unrecoverable fatigue is intriguing: it has potential implications for task design, such as in the contexts of education and work...On balance, physical and cognitive fatigue may be more similar than they are different.
Here is the Matthews et al. abstract:
The willingness to exert effort for reward is essential but comes at the cost of fatigue. Theories suggest fatigue increases after both physical and cognitive exertion, subsequently reducing the motivation to exert effort. Yet a mechanistic understanding of how this happens on a moment-to-moment basis, and whether mechanisms are common to both mental and physical effort, is lacking. In two studies, participants reported momentary (trial-by-trial) ratings of fatigue during an effort-based decision-making task requiring either physical (grip-force) or cognitive (mental arithmetic) effort. Using a novel computational model, we show that fatigue fluctuates from trial-to-trial as a function of exerted effort and predicts subsequent choices. This mechanism was shared across the domains. Selective to the cognitive domain, committing errors also induced momentary increases in feelings of fatigue. These findings provide insight into the computations underlying the influence of effortful exertion on fatigue and motivation, in both physical and cognitive domains.
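
The recoverable/unrecoverable distinction maps onto a very simple trial-by-trial update rule. A minimal sketch (my own parameterization, not the authors' fitted model):

```python
import numpy as np

# Two-component fatigue process: a recoverable component that rises with
# effort and decays with rest, plus an unrecoverable component that only
# rises with effort. Momentary rated fatigue is their sum.
def simulate_fatigue(efforts, alpha=0.3, delta=0.2, beta=0.05):
    rf, uf, trace = 0.0, 0.0, []
    for e in efforts:                  # e = exerted effort this trial (0 = rest)
        if e > 0:
            rf += alpha * e            # recoverable fatigue accumulates with work
            uf += beta * e             # unrecoverable fatigue accumulates too
        else:
            rf = max(0.0, rf - delta)  # rest lets only the recoverable part decay
        trace.append(rf + uf)
    return np.array(trace)

# Alternating work and rest: the rest dips recover less and less ground
# as the unrecoverable component grows.
efforts = np.tile([1, 1, 1, 0], 10)
print(simulate_fatigue(efforts).round(2))
```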


Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao on how modern AI puts us into an age of exploration of computational reality. I got halfway through my usual gig of extracting clips from the article to archive the main points in a post for later lookup by myself or others. Halfway through this chore, I realized the material was so deep and dense that my clips contained more than half of the original text. So I halted the editing and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which is a vastly better job than I could have done without spending many more hours on the material: 

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.


Monday, December 11, 2023

There are more living cells on Earth than stars in the universe: ~ 10^30 vs. ~ 10^22

These numbers come from a fascinating piece by Dennis Overbye summarizing work by a team of biologists and geologists. Selected clips:

According to the fossil record, geology and evolution have been engaged in a dance for 3.8 billion years, since our planet was only 700 million years old. It was then that the first single-celled creatures appeared, perhaps in undersea volcanic vents, feasting on the chemical energy around them.
The population of cells has been growing exponentially ever since, even through geological disasters and extinction events, which opened up new avenues of evolution.
The seeds for animal life were sown sometime in the dim past when some bacterium learned to use sunlight to split water molecules and produce oxygen and sugar. By 2.4 billion years ago, with photosynthesis well-established, the amount of oxygen in the atmosphere began to rise dramatically. The Great Oxidation Event was clearly the biggest event in the history of the biosphere.
Without photosynthesis, the rest of creation would have little to eat. But it is just one strand in a web of geological feedback loops by which weather, oceans, microbes and volcanoes conspire to keep the globe basically stable and warm and allow life to grow.
The carbonate silicate cycle, for example, regulates the amount of carbon dioxide in the atmosphere; the gas traps heat and keeps the planet temperate and mostly stable. Rain washes carbon dioxide from the air and into the ocean; volcanoes disgorge it again from the underworld. As a result, a trillion gigatons of carbon have been cycled from gas to life and back again over the millenniums. That’s about 100 times as much carbon as exists on Earth, which suggests that, in principle, every atom of carbon has been recycled 100 times.
The rise of cyanobacteria set off what is known as the Cambrian Explosion about 550 million years ago, when multicellular creatures — animals — appeared in sudden splendiferous profusion in the fossil record. We were off to the Darwinian races.
Crockford and his colleagues realized that they could trace the population growth of cells through time by measuring mineral isotopes and the amount of oxygen in old rocks. As a result, they were able to estimate the total life that Earth has produced since its beginning — about 10^40 cells, roughly 10 billion times more than currently exist.
Although this number sounds huge, it represents only 10 percent of all the cells that will come about by the time the curtain falls on life on Earth a billion years from now. As the sun ages, it will brighten, astronomers say, amplifying the weathering and washing away of carbon dioxide. At the same time, as Earth’s interior gradually cools, volcanic activity will subside, cutting off the replenishment of the greenhouse gas.
As a result, Dr. Crockford said, “it is unlikely that Earth’s biosphere will ever grow beyond a time-integrated ∼10^41 cells across the planet’s entire habitable lifetime.”
But for now, Dr. Crockford and his colleagues wrote in their paper, “the extension of today’s relatively high rates of primary productivity will likely squeeze more life into less time.” The more cells there are, the more times they will replicate, producing more mutations, Dr. Crockford explained. We inhabitants of Earth’s biosphere have a billion years’ worth of surprises ahead of us.
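
The arithmetic connecting these numbers to the post's title is easy to check:

```latex
\frac{10^{40}\ \text{cells ever produced}}{10^{30}\ \text{cells alive today}} = 10^{10}
\quad (\text{roughly ten billion times more}),
\qquad
\frac{10^{40}\ \text{cells so far}}{10^{41}\ \text{projected lifetime total}} = 10\%.
```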

Thursday, December 07, 2023

Google's Bard and ChatGPT-4 on using AI to explore beyond language and words...

I show first the prompt I addressed yesterday afternoon to both Bard and ChatGPT-4.  The more nuanced response I received from Bard makes me wonder if Google has already started to use the new Gemini AI algorithms  announced yesterday to power Bard.  Here is the prompt, followed by the Bard and ChatGPT-4 responses:

Exploring beyond language and words - starting a conversation: Let us suppose that increasingly sophisticated LLMs, generative pretrained transformers, will become superior to humans in performing any linguistic chores requiring use and knowledge of all the world's written literature that was used as the training opus. This might free humans to pay more attention to understanding non-linguistic aspects of their experiences (such as nonverbal music, art, prosody, body language, social mirroring, etc). Could you suggest ways in which AI could assist in defining and elucidating the non-linguistic aspects of our lives so that we might devote more of our attention to them rather than to linguistic chores?

 
Google's Bard's response:
 
How AI could help us explore beyond language and words:

Your suggestion about increasingly sophisticated LLMs taking over linguistic tasks opens up a fascinating possibility: freeing humans to focus on non-linguistic aspects of experience. Here are some ways AI could assist us in this exploration:

1. Data analysis and pattern recognition:

  • Large-scale analysis of nonverbal communication: AI could analyze vast datasets of audio, video, and sensor data to identify patterns and relationships between nonverbal cues like facial expressions, gestures, posture, and vocal intonation. This could deepen our understanding of non-verbal communication in various contexts, from interpersonal communication to artistic expression.
  • Mapping emotions and mental states: AI could analyze physiological data like heart rate, skin conductance, and brain activity to identify correlations with emotions and mental states. This could lead to the development of tools that help us better understand our own and others' emotions, fostering empathy and improving communication.
  • Identifying patterns in non-verbal art forms: AI could analyze large collections of music, paintings, sculptures, and other non-verbal art forms to identify recurring patterns, styles, and themes. This could help us discover new insights into human creativity and unlock hidden meanings in art.

2. AI as a creative collaborator:

  • Generating novel forms of non-verbal communication: AI could be used to generate new forms of non-verbal communication, such as synthesized musical pieces, abstract visual art, or even gestures and facial expressions. This could lead to new artistic expressions and enhance communication beyond words.
  • Personalized non-verbal experiences: AI could personalize non-verbal experiences based on individual preferences and emotional states. For example, AI could curate music playlists or generate visual art that resonates with a user's mood. This could deepen our emotional engagement with non-verbal art forms and enhance our well-being.
  • Interactive storytelling and learning: AI could create interactive non-verbal narratives that adapt to user choices and emotions. This could provide engaging and personalized learning experiences in various fields, from language learning to cultural immersion.

3. AI as a facilitator for human interaction:

  • Real-time translation and interpretation of non-verbal cues: AI could be used to translate nonverbal cues in real-time, facilitating communication across cultures and bridging the gap between people with different communication styles.
  • Augmenting sensory experiences: AI could be used to create immersive virtual environments that stimulate various senses, allowing us to explore and interact with the world in new ways. This could broaden our understanding of non-verbal communication and deepen our appreciation for sensory experiences.
  • Promoting mindfulness and self-awareness: AI could be used to develop tools that help us become more aware of our own non-verbal communication and emotions. This could enhance our self-awareness and improve our ability to manage our emotions and relationships.

4. Ethical considerations:

While AI holds immense potential for assisting us in exploring the non-linguistic aspects of our lives, ethical considerations must be addressed. These include:

  • Bias and fairness: AI models are susceptible to biases present in the data they are trained on. It is crucial to ensure that AI tools used to analyze and interpret non-verbal communication are unbiased and fair.
  • Data privacy and security: The use of AI necessitates the collection and analysis of personal data. It is essential to ensure data privacy and security, safeguarding individuals' information from unauthorized access or misuse.
  • Human autonomy and control: AI should be used to enhance human understanding and experience, not replace it. We must ensure that AI remains a tool that empowers humans to explore and appreciate the non-linguistic aspects of life, not a force that dictates our thoughts and emotions.

By leveraging the power of AI responsibly and ethically, we can embark on a journey of deeper understanding and engagement with the non-linguistic aspects of our existence. This journey holds the potential to enrich our lives, strengthen our relationships, and unlock new forms of creativity and expression.

*********

ChatGPT-4's response:

The idea of AI assisting in understanding and elucidating non-linguistic aspects of human experience is intriguing and holds much potential. Here are several ways in which AI could be instrumental in this area:

  1. Enhanced Emotional Recognition and Analysis: AI can be trained to recognize and interpret human emotions through facial expressions, voice modulation, and body language. This would be particularly useful in enhancing interpersonal communication and understanding, especially in situations where verbal communication is limited or not possible.

  2. Music and Art Analysis: AI can analyze patterns, styles, and historical influences in music and art. By doing so, it can provide insights into the emotional and cultural contexts of various artworks or musical pieces, deepening our appreciation and understanding of these forms.

  3. Virtual Reality and Augmented Reality Experiences: AI can be used to create immersive VR and AR experiences that enable individuals to explore non-linguistic aspects of human culture and history, such as virtual tours of art galleries, historical sites, or even recreations of historical events.

  4. Non-Verbal Communication Training: AI can assist in teaching and improving non-verbal communication skills, such as understanding body language, gestures, and facial expressions, which are crucial for effective interpersonal interactions.

  5. Social Mirroring and Behavioral Analysis: AI can analyze social interactions and provide feedback or insights into social dynamics. This could be useful in understanding group behaviors, cultural norms, and even in improving social skills.

  6. Sensory Enhancement Technologies: AI-driven technologies could enhance or augment human senses, allowing for a deeper engagement with the non-linguistic elements of the world, such as experiencing a wider range of sounds or visual details.

  7. Emotional and Psychological Support: AI can offer personalized support and interventions based on non-linguistic cues, aiding in mental health and emotional well-being.

  8. Enhancing Creative Processes: AI can assist artists, musicians, and creators by suggesting non-linguistic elements that might enhance their work, based on analysis of vast amounts of artistic and musical data.

  9. Personalized Learning and Development: AI can tailor learning experiences to individual preferences and learning styles, focusing on non-linguistic methods like interactive simulations, visual aids, and hands-on activities.

  10. Cross-Cultural Understanding and Communication: AI can help bridge cultural gaps by providing insights into non-verbal cues and norms that are specific to different cultures, aiding in cross-cultural understanding and communication.

In all these ways, AI has the potential to significantly enrich human understanding and appreciation of the non-linguistic aspects of our lives, allowing us to engage more deeply with the world beyond words.


Sunday, December 03, 2023

A flash of clarity on what current LLMs can and cannot do. An AI apocalypse does not appear to be imminent...

In his most recent newsletter, Venkatesh Rao pulls up a Twitter thread he wrote in 2017 making what he calls an ontological distinction between boundary intelligence and interior intelligence. This was before transformers like GPT-1 began to attract more attention. The distinction Rao makes is central to understanding what current large language models (LLMs) can and cannot do. Here is his unedited text from 2017:
 
1. I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence

2. Boundary intelligence is how you deal with information flows across the boundary of your processing abilities 

3. Interior intelligence is how you process information. Includes logic, emotional self-regulation, etc.

4. A thesis I've been converging on is that boundary intelligence is VASTLY more consequential once interior intelligence exceeds a minimum

5. Boundary intelligence is by definition meta, since you're tuning your filters and making choices about what to even let hit your attention

6. I think it is highly consequential because almost all risk management happens via boundary intelligence (blindspots, black swans etc)

7. Interior intelligence is your poker skill and strategy. Boundary intelligence is picking which table to sit down at

8. Interior intelligence is reading a book competently, extracting insights and arguments. Boundary intelligence is picking books to read. 

9. Interior intelligence is being a good listener. Boundary intelligence is deciding whom to listen to. 

10. Basically, better input plus mediocre IQ beats bad input and genius IQ every time, so boundary intelligence is leverage

11. And obviously, boundary intelligence is more sensitive to context. The noisier and angrier info streams get, the more BI beats II

12. Most of boundary intelligence has to do with input intelligence, but output intelligence becomes more important with higher agency 

13. Output intelligence is basically the metacognition around when/where/how/to-whom/why to say or do things you are capable of saying/doing

14. We think a lot about external factors in decisions, but output intelligence is about freedom left after you've dealt with external part

Next, Rao extracts the following from the abstract of a forthcoming paper by Yadlowsky et al.:

…when presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks. Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities.

And then, in the following selected clips, he continues his text:

Translated into the idiom from the fourteen points above, this translates into “It’s all interior intelligence, just within a very large boundary.” There is no boundary intelligence in current machine learning paradigms. There isn’t even an awareness of boundaries; just the ability to spout statements about doubt, unknowns, and boundaries of knowability; a bit like a blind person discussing color in the abstract.

This is not to say AI cannot acquire BI. In fact, it can do so in a very trivial way, through embodiment. Just add robots around current AIs and let them loose in real environments.

The reason people resist this conclusion is irrational attachment to interior intelligence as a sacred cow (and among computer science supremacists, a reluctance to acknowledge the relevance and power of embodiment and situatedness in understandings of intelligence). If much of the effectual power of intelligence is attributable to boundary intelligence, there is much less room for sexy theories of interior intelligence. Your (cherished or feared) god-like AI is reduced to learning through FAFO (Fuck around and find out) feedback relationships with the rest of the universe, across its boundary, same as us sadsack meatbag intelligences with our paltry 4-GPU-grade interior intelligence.

In their current (undoubtedly very impressive) incarnation, what we have with AI is 100% II, 0% BI. Human and animal intelligences (and I suspect even plant intelligences, and definitely evolutionary process intelligence) are somewhere between 51-49 to 99.9-0.1% BI. They are dominated to varying degrees by boundary intelligence. Evolutionary processes are 100% BI, 0% II.

Wednesday, November 29, 2023

Meta-Learned Models of Cognition

I pass on the text of a recent email from Behavioral and Brain Sciences inviting commentary on an article by Binz et al.  I am beginning to plow through the interesting text and figures - and will mention that motivated readers can obtain a PDF of the article from me.

Target Article: Meta-Learned Models of Cognition

Authors: Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, Matthew Botvinick, Jane X. Wang, and Eric Schulz

Deadline for Commentary Proposals: Wednesday, December 20, 2023

Abstract: Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates the commitment to a particular prior and a likelihood function which - in combination with Bayes' rule - determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to this day. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.

Keywords: meta-learning, rational analysis, Bayesian inference, cognitive modeling, neural networks
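
The abstract's central claim, that meta-learning can construct Bayes-optimal learning algorithms, can be made concrete with a toy example (mine, not from the article): meta-train a simple predictor on many coin-bias tasks drawn from a uniform prior, and it converges to Laplace's rule, the Bayes-optimal predictor under that prior:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10                       # flips per task
sums = np.zeros((T, T + 1))  # running sums of next-flip outcomes per (n, h)
counts = np.zeros((T, T + 1))

# Meta-training: many tasks, each a coin with bias theta ~ Uniform(0, 1).
# The "model" is a lookup table estimating E[next flip | n flips seen, h heads],
# i.e., its inductive bias is acquired purely from experience across tasks.
for _ in range(100_000):
    theta = rng.random()
    flips = rng.random(T) < theta
    h = 0
    for n, flip in enumerate(flips):
        sums[n, h] += flip
        counts[n, h] += 1
        h += flip

meta_learned = sums / np.maximum(counts, 1)
for n, h in [(2, 2), (5, 1), (9, 4)]:
    bayes = (h + 1) / (n + 2)  # Laplace's rule: Bayes-optimal under the uniform prior
    print(f"n={n} h={h}: meta-learned={meta_learned[n, h]:.3f}  Bayes={bayes:.3f}")
```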

Monday, November 27, 2023

The feasibility of artificial consciousness through the lens of neuroscience

Some interesting perspectives from Aru, Larkum, and Shine in Trends in Neurosciences. Motivated readers can obtain a copy of the article's text from me.  

Highlights

Large language models (LLMs) can produce text that leaves the impression that one may be interacting with a conscious agent.
Present-day LLMs are text-centric, whereas the phenomenological umwelt of living organisms is multifaceted and integrated.
Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing. Currently, such processes are not implemented in LLMs.
The organizational complexity of living systems has no parallel in present-day AI tools. Possibly, AI systems would have to capture this biological complexity to be considered conscious.
LLMs and the current debates on conscious machines provide an opportunity to re-examine some core ideas of the science of consciousness.
Abstract
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Sunday, November 26, 2023

Religious wars in the tech industry.

In this post I continue to use MindBlog as my private reference library to put down essential ideas I want to hang onto by extracting relevant clips, in this case from the most recent essay by Venkatesh Rao. Here are the clips:

Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The Open AI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.

Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.

Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.

Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.

Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.

Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.

It’s not that there’s no common ground. Everyone agrees GPUs are important, Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is the part that is common ground is largely disconnected from the contentious bits.

In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps continuation of incredulous staring by other means. Such as boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and in the future, perhaps actual hot wars.

Why do I think we are we entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) The vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.

The Six Religions

Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.

The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.

The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.

The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.
 
The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.
 
The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.

The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.

Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?

The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology. …. Ordinary political interests, while present, are secondary.

The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…

The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.

What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.

Sunday, November 19, 2023

AI is infusing our daily lives. Is it an existential threat?

As AI is being subtly melded into our daily lives by the Google, Apple, and Microsoft clouds, there is continuing debate over whether it is or isn't an existential threat. This debate has been brought to a head by the sudden and unexpected firing of OpenAI's chief executive Sam Altman, which starkly exposed the philosophical rift in the AI community between those who are more or less concerned about the potential risks of its untrammeled development. How many guardrails, if any, should be in place? I have been on the techno-optimists' side of this debate and passed on Marc Andreessen's rather extreme and overstated Substack post titled "Why AI Will Save the World" in a June 23 MindBlog post. I was then completely horrified by "Fighting," a subsequent piece by this ideologist of the Silicon Valley elite, which took a very positive view of Elon Musk challenging Mark Zuckerberg to a cage fight. I was going to write a piece on his dystopian future world, but two NYTimes pieces have done a better job of it than I could. They are Elizabeth Spiers's "A Tech Overlord's Horrifying, Silly Vision for Who Should Rule the World" and Ezra Klein's "The Chief Ideologist of the Silicon Valley Elite Has Some Strange Ideas".

Friday, November 10, 2023

Personalized chatbots and wearable AI devices for melding AI with our everyday lives

Personalized GPTs are here, but apparently not yet at a level usable by consumers like myself who don't want to do coding. I've been looking into making my own personal chatbot whose training data is 50 MB of my personal journal, about 20,000 pages. I would then like to query and chat with the resulting chatbot about events in my life since 1980. If anyone reading this has some advice, let me know!
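
For what it's worth, one route that avoids any model training is retrieval: embed chunks of the journal, then pull the passages most relevant to a question into a chat model's context along with the question. A minimal sketch, assuming the journal sits in a single text file; the file name, chunk size, and embedding model here are placeholder choices:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # small off-the-shelf embedding model

# "journal.txt" is a placeholder; fixed-size chunks are the crudest possible split.
text = open("journal.txt", encoding="utf-8").read()
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
embeddings = model.encode(chunks, normalize_embeddings=True)

def retrieve(question, k=5):
    """Return the k journal chunks most similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity, since vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# These passages, pasted into any chat model along with the question,
# give it the relevant slice of the journal to answer from.
for passage in retrieve("What was I doing in the summer of 1985?"):
    print(passage[:120], "...")
```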

Below are some URLs to my morning's reading:

These links are to articles from Kevin Roose, NYTimes tech columnist:  

Personalized A.I. Agents Are Here. Is the World Ready for Them?

Kevin Roose's Hard Fork podcast (transcript available) on this and other AI topics 

And this article is on AI devices seeking to cut into smartphone dominance 

It points to two such devices: The Humane Ai Pin and The Rewind AI Pendant 

Wednesday, November 08, 2023

Self-Organization: Nature's Intelligent Design

I want to pass on this link to a talk by Clint Sprott, who initiated the Chaos and Complexity Discussion group that I was an active member of before leaving Madison, WI. I remain on their email listserve. The slides for the talk can be downloaded here. Talk description:
Clint Sprott, Professor Emeritus, Department of Physics, UW-Madison, explains that although we believe that complex patterns must have a complex cause, patterns may spontaneously arise. This self-organization, which occurs in nature, can be described with simple computer models that replicate the features of the patterns.
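
To give a concrete flavor of what "simple computer models" can mean here, below is a minimal sketch of my own (an illustration, not something from Sprott's slides): Conway's Game of Life, in which two local rules applied to a random grid of cells spontaneously generate gliders, oscillators, and stable structures that no one designed.

```python
# Conway's Game of Life: spontaneous pattern formation from random noise.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))   # random initial "soup" of live/dead cells

def step(g: np.ndarray) -> np.ndarray:
    # Count the 8 neighbors of every cell, with wraparound (periodic) edges.
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Two rules: a cell is alive next step if it has exactly 3 live neighbors,
    # or if it is alive now and has exactly 2 live neighbors.
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

for _ in range(200):   # ordered structures emerge with no designer
    grid = step(grid)
print(grid.sum(), "live cells remain after 200 steps")
```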

Monday, November 06, 2023

Visual event segmentation alters higher-level thought.

An interesting piece of work from Ongchoco et al.:  

Significance

Numbers can be unexpectedly powerful. Suppose you must provide the last two digits of your social security number (SSN), after which you are asked how much you are willing to pay for a bottle of wine. Though your SSN digits are irrelevant to your valuation judgment, they nonetheless influence numerical estimates, such that lower SSN digits lead to lower valuations and higher SSN digits, to higher valuations. Such “anchoring” effects are extremely prevalent and powerful. Here, we demonstrate how a feature of low-level visual perception—the spontaneous segmentation of experience at event boundaries, as when you simply walk through a doorway—can restrict or even eliminate anchoring effects across economic valuations, factual questions, and legal judgments.
Abstract
Research on higher-level thought has revealed many principles of reasoning and decision-making but has rarely made contact with how we perceive the world in the first place. Here we show how a lower-level property of perception—the spontaneous and task-irrelevant segmentation of continuous visual stimulation into discrete events—can restrict one of the most notorious biases in decision-making: numerical anchoring. Subjects walked down a long room in an immersive three-dimensional (3D) animation and then made a numerical judgment (e.g., of how much a suitcase is worth, or of how many hours of community service a minor crime deserved). Critically, some subjects passed through a doorway (a visual event boundary) during their virtual walk, while others did not—equating time, distance traveled, and visual complexity. The anchoring manipulation was especially innocuous, not appearing to be part of the experiment at all. Before the online trial began, subjects reported the two-digit numerical value from a visually distorted “CAPTCHA” (“to verify that you are human”)—where this task-irrelevant anchor was either low (e.g., 29) or high (e.g., 92). With no doorway, we observed reliable anchoring effects: Higher CAPTCHA values produced higher estimates. With the doorway, however, such effects were attenuated or even eliminated. This generalized across tasks involving item valuations, factual questions, and legal judgments and in tests of both incidental and explicit anchoring. This demonstrates how spontaneous visual event segmentation can have profound consequences for higher-level thought.

Friday, November 03, 2023

How our genes support our illusory selves - the "Baldwin effect"

I have been thinking about giving more explanation of a paragraph from my Oct. 25 post, A Materialist's Credo:
Modern neuroscience has proved that our experienced perceptions of sensing and acting are these predictions. They are fantasies, or illusions, as is our sense of having a self with agency that experiences value, purpose, and meaning. Everything we do and experience is in the service of reducing surprises by fulfilling these fantasies. An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language.

The sentence "An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language" invokes what is called the "Baldwin effect," or Baldwin hypothesis.

I decided to turn to ChatGPT-4 for some help.

Deric:  

What is the "Baldwin effect" in evolutionary biology, particularly with respect to human evolution?

ChatGPT: 

The "Baldwin effect" describes a mechanism by which learned behaviors can eventually become inherited traits through the process of natural selection. Named after the psychologist James Mark Baldwin, who proposed the idea in the late 19th century, this concept bridges the gap between phenotypic plasticity (the ability of an organism to change its phenotype in response to changes in the environment) and genetic evolution.

Here's a simplified overview of how the Baldwin effect works:
1. Initial Variation: Within a population, there are variations in behavior. Some individuals can learn a new beneficial behavior in response to a change in the environment, while others cannot.
2. Learning as a Temporary Solution: The ability to learn this behavior offers a short-term advantage. Those who can learn the new behavior have a better chance of surviving and reproducing.
3. Genetic Fixation Over Time: Over generations, there can be genetic changes that make this learned behavior easier to acquire or even innate. As those with a predisposition to the behavior have a better survival and reproductive rate, genes that support the behavior become more common.
4. Reduced Reliance on Learning: As the behavior becomes more genetically rooted, the population may become less reliant on learning the behavior from scratch. Eventually, what was once a learned behavior can manifest itself as an instinctual behavior.

With respect to human evolution, one could theorize the Baldwin effect playing a role in various capacities. For example, our ancestors might have learned specific skills, like tool use or certain social behaviors, that provided survival advantages. Over time, genetic predispositions towards these skills could have developed, leading to humans having innate tendencies or quicker learning capacities in these areas.

However, it's essential to note that proving the Baldwin effect in human evolution is challenging due to the intricacies of genetic and cultural evolution. Still, it offers an exciting perspective on the interplay between learning and genetics.
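
(An aside from me, not part of the ChatGPT exchange: the classic computational demonstration of this four-step logic is Hinton and Nowlan's 1987 simulation, in which lifetime learning smooths the fitness landscape until the learned solution becomes genetically fixed. Below is a compressed sketch of that model, with the population size and number of learning trials shrunk from the original for brevity.)

```python
# Compressed Hinton & Nowlan (1987)-style Baldwin effect simulation.
# Each genome has genes fixed at 0 or 1, or left plastic ("?") and settled
# by trial-and-error learning during the organism's lifetime. The single
# "good" configuration is all-1s; any innately wrong (0) gene is fatal.
import random

GENES, POP, TRIALS, GENERATIONS = 20, 200, 500, 50

def fitness(genome):
    """Lifetime learning: randomly guess the plastic genes for up to TRIALS trials."""
    if 0 in genome:                       # a wrong fixed gene can never be learned around
        return 1.0
    plastic = [i for i, g in enumerate(genome) if g == "?"]
    for t in range(TRIALS):
        if all(random.random() < 0.5 for _ in plastic):   # one random guess at all plastic genes
            return 1.0 + 19.0 * (TRIALS - t) / TRIALS     # earlier discovery -> higher fitness
    return 1.0

# Initial alleles: 0 and 1 with probability 0.25 each, "?" with probability 0.5.
pop = [[random.choice([0, 1, "?", "?"]) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    weights = [fitness(g) for g in pop]
    new_pop = []
    for _ in range(POP):
        a, b = random.choices(pop, weights=weights, k=2)  # fitness-proportional parents
        cut = random.randrange(GENES)
        new_pop.append(a[:cut] + b[cut:])                 # one-point crossover, no mutation
    pop = new_pop

# The Baldwin effect in miniature: plastic "?" alleles are gradually
# replaced by innately correct 1s, so less and less learning is needed.
innate = sum(g.count(1) for g in pop) / (POP * GENES)
print(f"fraction of innately correct genes after {GENERATIONS} generations: {innate:.2f}")
```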

Deric: 

How might the Baldwin effect hypothesis be applied to the human invention of an illusory "I" or "self" capable of forming social groups, whose cohesion is reinforced by emotional behaviors driven by evolved neuroendocrine mechanisms?

ChatGPT:

The concept of an illusory "I" or "self" is deeply rooted in both philosophical discourses and cognitive science studies. When considered in the context of the Baldwin effect, the evolution of such a self-concept could be framed as an adaptive trait in the intricate web of human social structures. Here's one way to interpret this:   

1. Initial Variation and Learning: Early human communities might have had members who had stronger self-awareness or more robust personal identities than others. Such individuals might have been better at recognizing their roles within groups, remembering past interactions, and predicting future interactions. This self-awareness might initially emerge as a learned or nurtured cognitive trait.
2. Social Advantages: Individuals with a pronounced sense of self could have been more adept at forming and maintaining social bonds, making them crucial members of their communities. They might have been better at understanding social hierarchies, reciprocity, and the nuances of cooperation, which are essential for the survival and success of early human societies.
3. Emotional Behaviors and Neuroendocrine Mechanisms: As these individuals navigated their social landscapes, their brains could have evolved neuroendocrine mechanisms to support the complex emotions related to social interactions—empathy, jealousy, gratitude, shame, etc. These emotions, in turn, would reinforce the cohesion and stability of social groups, ensuring the group's collective survival and success.
4. Genetic Fixation Over Time: Over generations, the capacity for a strong self-concept and the accompanying emotional depth might become increasingly genetically ingrained. This evolution would mean that newer generations would be born with an innate predisposition towards these traits, reducing the learning curve required to acquire them.
5. Cultural Evolution: Parallel to the genetic fixation, cultural mechanisms might evolve to further reinforce the idea of the self and its importance in the society—through stories, traditions, rituals, etc. The interplay between genetic predispositions and cultural learning might lead to the establishment of the 'self' as a foundational concept in human societies.

In this context, the Baldwin effect would suggest that our ancestors' learned behavior—forming a concept of self for social advantages—over time, became a genetically supported predisposition. This evolution would mean that the complex cognitive and emotional behaviors supporting a strong self-concept became more instinctual and less reliant on individual learning.

Wednesday, November 01, 2023

David Brooks on "What is a Person?"


David Brooks has an astounding ability to simplify and present important ideas. I pass on a few clips from Chapter 5 - titled "What is a Person?" - of his new book “How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen.” In chapter 9 he offers an equally lucid presentation of work in the cognitive sciences by Gibson and Proffitt showing how people in different life circumstances literally see different worlds. I've enjoyed reading this book and recommend that you read it.

As we try to understand other people, we want to be constantly asking ourselves: How are they perceiving this situation? How are they experiencing this moment? How are they constructing their reality?
Let me dip briefly into brain science to try to show you how radical this process of construction is. Let’s take an example as simple as the act of looking around a room. It doesn’t feel like you're creating anything. It feels like you're taking in what’s objectively out there. You open your eyes. Light waves flood in. Your brain records what you see: a chair, a painting, a dust bunny on the floor. It feels like one of those old-fashioned cameras—the shutter opens and light floods in and gets recorded on the film.
But this is not how perception really works. Your brain is locked in the dark, bony vault of your skull. Its job is to try to make sense of the world given the very limited amount of information that makes it into your retinas, through the optic nerves, and onto the integrative layer of the visual cortex. Your senses give you a poor-quality, low-resolution snapshot of the world, and your brain is then forced to take that and construct a high-definition, feature-length movie.
To do that, your visual system constructs the world by taking what you already know and applying it to the scene in front of you. Your mind is continually asking itself questions like “What is this similar to?” and “Last time I was in this situation, what did I see next?” Your mind projects out a series of models of what it expects to see. Then the eyes check in to report back about whether they are seeing what the mind expected. In other words, seeing is not a passive process of receiving data; it’s an active process of prediction and correction.
Perception, the neuroscientist Anil Seth writes, is “a generative, creative act.” It is “an action-oriented construction, rather than a passive registration of an objective external reality.” Or as the neuroscientist Lisa Feldman Barrett notes, “Scientific evidence shows that what we see, hear, touch, taste, and smell are largely simulations of the world, not reactions to it.” Most of us non-neuroscientists are not aware of all this constructive activity, because it happens unconsciously. It's as if the brain is composing vast, complex Proustian novels, and to the conscious mind it feels like no work at all.
Social psychologists take a wicked delight in exposing the flaws of this prediction-correction way of seeing. They do this by introducing things into a scene that we don’t predict will be there and therefore don’t see. You probably know about the invisible gorilla experiment. Researchers present subjects with a video of a group of people moving around passing a basketball and ask the subjects to count the number of passes by the team wearing white. After the video, the researchers ask, “Did you see the gorilla?” Roughly half the research subjects have no idea what the researchers are talking about. But when they view the video a second time, with the concept “gorilla” now in their heads, they are stunned to see that a man in a gorilla suit had strolled right into the circle, stood there for a few seconds, and then walked out. They didn’t see it before because they didn’t predict “gorilla.”
In my favorite experiment of this sort, a researcher asks a student for directions to a particular place on a college campus. The student starts giving directions. Then a couple of “workmen”—actually, two other researchers— rudely carry a door between the directions asker and the directions giver. As the door passes between them, the directions asker surreptitiously trades places with one of the workmen. After the door has passed, the directions giver finds himself giving directions to an entirely different human being. And the majority of these directions givers don’t notice. They just keep on giving directions. We don’t expect one human being to magically turn into another, and therefore we don't see it when it happens.
In 1951 there was a particularly brutal football game between Dartmouth and Princeton. Afterward, fans of both teams were furious because, they felt, the opposing team had been so vicious. When psychologists had students rewatch a film of the game in a calmer setting, the students still fervently believed that the other side had committed twice as many penalties as their own side. When challenged about their biases, both sides pointed to the game film as objective proof that their side was right. As the psychologists researching this phenomenon, Albert Hastorf and Hadley Cantril, put it, “The data here indicate that there is no such ‘thing’ as a ‘game’ existing ‘out there’ in its own right which people merely ‘observe.’ The ‘game’ ‘exists’ for a person and is experienced by him only insofar as certain things have significances in terms of his purpose.” The students from the different schools constructed two different games depending on what they wanted to see. Or as the psychiatrist Iain McGilchrist puts it, “The model we choose to use to understand something determines what we find.”
Researchers like exposing the flaws in our way of seeing, but I’m constantly amazed at how brilliant the human mind is at constructing a rich, beautiful world. For example, in normal conversation, people often slur and mispronounce words. If you heard each word someone said in isolation, you wouldn't be able to understand 50 percent of them. But because your mind is so good at predicting what words probably should be in what sentence, you can easily create a coherent flow of meaning from other people's talk.
The universe is a drab, silent, colorless place. I mean this quite literally. There is no such thing as color and sound in the universe; it’s just a bunch of waves and particles. But because we have creative minds, we perceive sound and music, tastes and smells, color and beauty, awe and wonder. All that stuff is in here in your mind, not out there in the universe.
I've taken this dip into neuroscience to give the briefest sense of just how much creative artistry every person is performing every second of the day. And if your mind has to do a lot of constructive work in order for you to see the physical objects in front of you, imagine how much work it has to undertake to construct your identity, your life story, your belief system, your ideals. There are roughly eight billion people on Earth, and each one of them sees the world in their own unique, never-to-be-repeated way.
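
Finally, a toy illustration from me (my addition, not from Brooks's book) of the prediction-correction loop described above: a "brain" that never sees the world directly, only noisy sensory samples, and that nudges a running prediction by a fraction of each prediction error.

```python
# Toy prediction-correction loop (an illustration, not a brain model).
import random

true_brightness = 5.0   # the state of the world, hidden from the "brain"
prediction = 0.0        # the brain's current model of the world
learning_rate = 0.2     # how strongly each prediction error updates the model

for t in range(30):
    sensation = true_brightness + random.gauss(0, 1.0)  # noisy "retinal" sample
    error = sensation - prediction                      # prediction error
    prediction += learning_rate * error                 # correction step
print(f"final prediction: {prediction:.2f} (true value {true_brightness})")
```

The prediction converges on the hidden value even though no single sensory sample is trustworthy, which is the sense in which seeing is "an active process of prediction and correction" rather than passive recording.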