Wednesday, September 06, 2023

Mapping the physical properties of odorant molecules to their perceptual characteristics.

I pass on parts of the editor's summary and the abstract of a foundational piece of work by Lee et al. that produces a map linking odorant molecular structures to their perceptual experience, analogous to the known maps for vision and hearing that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. I also pass on the first few (slightly edited) paragraphs of the paper that set context. Motivated readers can obtain a PDF of the article from me. (This work does not engage the problem, noted by Sagar et al., that the same volatile molecule may smell different to different people - the same odor can smell ‘fruity’ and ‘floral’ to one person and ‘musky’ and ‘decayed’ to another.)

Summary

For vision and hearing, there are well-developed maps that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. The sense of olfaction does not yet have such a map. Using a graph neural network, Lee et al. developed a principal odor map (POM) that faithfully represents known perceptual hierarchies and distances. This map outperforms previously published models to the point that replacing a trained human’s responses with the model output would improve overall panel description. The POM coordinates were able to predict odor intensity and perceptual similarity, even though these perceptual features were not explicitly part of the model training.
Abstract
Mapping molecular structure to odor perception is a key challenge in olfaction. We used graph neural networks to generate a principal odor map (POM) that preserves perceptual relationships and enables odor quality prediction for previously uncharacterized odorants. The model was as reliable as a human in describing odor quality: On a prospective validation set of 400 out-of-sample odorants, the model-generated odor profile more closely matched the trained panel mean than did the median panelist. By applying simple, interpretable, theoretically rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
Initial paragraphs of text:
A fundamental problem in neuroscience is mapping the physical properties of a stimulus to perceptual characteristics. In vision, wavelength maps to color; in audition, frequency maps to pitch. By contrast, the mapping from chemical structures to olfactory percepts is poorly understood. Detailed and modality-specific maps such as the Commission Internationale de l’Eclairage (CIE) color space (1) and Fourier space (2) led to a better understanding of visual and auditory coding. Similarly, to better understand olfactory coding, the field of olfaction needs a better map.
Pitch increases monotonically with frequency. By contrast, the relationship between odor percept and odorant structure is riddled with discontinuities... frequently, structurally similar pairs are not perceptually similar pairs. These discontinuities in the structure-odor relationship suggest that standard chemoinformatic representations of molecules—functional group counts, physical properties, molecular fingerprints, and so on—that have been used in recent odor modeling work are inadequate to map odor space.
To generate odor-relevant representations of molecules, we constructed a message passing neural network (MPNN), which is a specific type of graph neural network (GNN), to map chemical structures to odor percepts. Each molecule was represented as a graph, with each atom described by its valence, degree, hydrogen count, hybridization, formal charge, and atomic number. Each bond was described by its degree, its aromaticity, and whether it is in a ring. Unlike traditional fingerprinting techniques, which assign equal weight to all molecular fragments within a set bond radius, a GNN can optimize fragment weights for odor-specific applications. Neural networks have unlocked predictive modeling breakthroughs in diverse perceptual domains [e.g., natural images, faces, and sounds] and naturally produce intermediate representations of their input data that are functionally high-dimensional, data-driven maps. We used the final layer of the GNN (henceforth, “our model”) to directly predict odor qualities, and the penultimate layer of the model as a principal odor map (POM). The POM (i) faithfully represented known perceptual hierarchies and distances, (ii) extended to out-of-sample (hereafter, “novel”) odorants, (iii) was robust to discontinuities in structure-odor distances, and (iv) generalized to other olfactory tasks.
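To make the graph featurization above concrete, here is a minimal sketch, assuming the open-source RDKit toolkit, of how the atom and bond descriptors listed in the paragraph could be extracted from a SMILES string. The function name and feature encoding are illustrative assumptions, not the authors' pipeline, which is described in their supplementary methods.

```python
# Illustrative sketch only: atom/bond features follow the text above
# (valence, degree, H count, hybridization, formal charge, atomic number;
# bond type, aromaticity, ring membership). Not the authors' actual code.
from rdkit import Chem

def molecule_to_graph(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    atom_features = [
        (a.GetTotalValence(), a.GetDegree(), a.GetTotalNumHs(),
         str(a.GetHybridization()), a.GetFormalCharge(), a.GetAtomicNum())
        for a in mol.GetAtoms()
    ]
    bond_features = [
        (b.GetBeginAtomIdx(), b.GetEndAtomIdx(),
         b.GetBondTypeAsDouble(), b.GetIsAromatic(), b.IsInRing())
        for b in mol.GetBonds()
    ]
    return atom_features, bond_features

# Example: vanillin, a common odorant
atoms, bonds = molecule_to_graph("O=Cc1ccc(O)c(OC)c1")
```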
We curated a reference dataset of ~5000 molecules, each described by multiple odor labels (e.g., creamy, grassy), by combining the Good Scents and Leffingwell & Associates (GS-LF) flavor and fragrance databases. To train our model, we optimized model parameters with a weighted-cross entropy loss over 150 epochs using Adam with a learning rate decaying from 5 × 10−4 to 1 × 10−5 and a batch size of 128. The GS-LF dataset was split 80/20 training/test, and the 80% training set further subdivided into five cross-validation splits. These cross-validation splits were used to optimize hyperparameters using Vizier, a Bayesian optimization algorithm, by tuning across 1000 trials. Details about model architecture and hyperparameters are given in the supplementary methods. When properly hyperparameter-tuned, performance was found to be robust across many model architectures. We present results for the model with the highest mean area under the receiver operating characteristic curve (AUROC) on the cross-validation set (AUROC = 0.89).
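For readers who want to see what such a training configuration looks like in code, here is a hedged PyTorch sketch of the stated settings (150 epochs, Adam, learning rate decaying from 5 × 10−4 to 1 × 10−5, batch size 128). The toy stand-in model, the dummy data, the label count, the exponential shape of the decay, and the uniform loss weights are all assumptions; only the numeric settings come from the text.

```python
# Sketch of the training setup described above; the GNN itself is replaced
# by a toy MLP and random data so the snippet runs standalone.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

EPOCHS, BATCH_SIZE = 150, 128
LR_START, LR_END = 5e-4, 1e-5
N_LABELS = 138        # placeholder count of odor descriptors
N_FEATURES = 256      # placeholder input width; the real model consumes molecular graphs

# Toy stand-in for the message passing network, one logit per odor label.
model = nn.Sequential(nn.Linear(N_FEATURES, 512), nn.ReLU(), nn.Linear(512, N_LABELS))

optimizer = torch.optim.Adam(model.parameters(), lr=LR_START)
gamma = (LR_END / LR_START) ** (1.0 / EPOCHS)   # per-epoch factor: 5e-4 -> 1e-5 over 150 epochs
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
# The paper uses a weighted cross-entropy; uniform per-label weights are a placeholder here.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.ones(N_LABELS))

# Dummy data standing in for the ~80% GS-LF training split.
X = torch.randn(4000, N_FEATURES)
Y = (torch.rand(4000, N_LABELS) < 0.05).float()
train_loader = DataLoader(TensorDataset(X, Y), batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()    # decay shape (exponential) is an assumption; only endpoints are stated
```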

Monday, September 04, 2023

Inhalation boosts perceptual awareness and decision speed

From Ludovic Molle et al. (open source):  

Significance

Breathing is a ubiquitous biological rhythm in animal life. However, little is known about its effect on consciousness and decision-making. Here, we measured the respiratory rhythm of humans performing a near-threshold discrimination experiment. We show that inhalation, compared with exhalation, improves perceptual awareness and accelerates decision-making while leaving accuracy unaffected.
Summary
The emergence of consciousness is one of biology’s biggest mysteries. During the past two decades, a major effort has been made to identify the neural correlates of consciousness, but in comparison, little is known about the physiological mechanisms underlying first-person subjective experience. Attention is considered the gateway of information to consciousness. Recent work suggests that the breathing phase (i.e., inhalation vs. exhalation) modulates attention, in such a way that attention directed toward exteroceptive information would increase during inhalation. One key hypothesis emerging from this work is that inhalation would improve perceptual awareness and near-threshold decision-making. The present study directly tested this hypothesis. We recorded the breathing rhythms of 30 humans performing a near-threshold decision-making task, in which they had to decide whether a liminal Gabor was tilted to the right or the left (objective decision task) and then to rate their perceptual awareness of the Gabor orientation (subjective decision task). In line with our hypothesis, the data revealed that, relative to exhalation, inhalation improves perceptual awareness and speeds up objective decision-making, without impairing accuracy. Overall, the present study builds on timely questions regarding the physiological mechanisms underlying consciousness and shows that breathing shapes the emergence of subjective experience and decision-making.

Friday, September 01, 2023

The fragility of artists’ reputations from 1795 to 2020

Zhang et al. do an interesting study using natural language processing to measure reputation over time:  

Significance

This study uses machine-learning techniques and a historical corpus to examine the evolution of artists’ reputations over time. Contrary to popular wisdom, we find that most artists’ reputations peak just before their death, and then start to decline. This decline is strongest for artists who were most popular during their lifetime. We show that artists’ reduced visibility and changes in the public’s aesthetic taste explain much of the posthumous reputation decline. This study highlights how social perception of historical figures can shift and emphasizes the vulnerability of human reputation. Methodologically, the study illustrates an application of natural language processing to measure reputation over time.
Abstract
This study explores the longevity of artistic reputation. We empirically examine whether artists are more- or less-venerated after their death. We construct a massive historical corpus spanning 1795 to 2020 and build separate word-embedding models for each five-year period to examine how the reputations of over 3,300 famous artists—including painters, architects, composers, musicians, and writers—evolve after their death. We find that most artists gain their highest reputation right before their death, after which it declines, losing nearly one SD every century. This posthumous decline applies to artists in all domains, includes those who died young or unexpectedly, and contradicts the popular view that artists’ reputations endure. Contrary to the Matthew effect, the reputational decline is the steepest for those who had the highest reputations while alive. Two mechanisms—artists’ reduced visibility and the public’s changing taste—are associated with much of the posthumous reputational decline. This study underscores the fragility of human reputation and shows how the collective memory of artists unfolds over time.
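As an illustration of the method the abstract describes, here is a hedged sketch of training a separate word-embedding model per time period and scoring an artist's "reputation" within each. The use of gensim, the choice of evaluation words, and the cosine-similarity score are illustrative assumptions; the paper's actual corpus construction and reputation measure may differ.

```python
# Sketch only: per-period Word2Vec models and a crude reputation proxy.
from gensim.models import Word2Vec

def reputation_by_period(corpus_by_period, artist,
                         eval_words=("celebrated", "genius", "masterpiece")):
    """corpus_by_period: dict mapping a period label (e.g. '1900-1904')
    to a list of tokenized sentences drawn from that period's texts."""
    scores = {}
    for period, sentences in corpus_by_period.items():
        model = Word2Vec(sentences=sentences, vector_size=100, window=5, min_count=5)
        if artist in model.wv:
            present = [w for w in eval_words if w in model.wv]
            if present:
                # similarity of the artist token to positively valenced evaluation words
                scores[period] = float(model.wv.n_similarity([artist], present))
    return scores
```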

Wednesday, August 30, 2023

Neuron–astrocyte networks might perform the core computations of AI transformer blocks

Fascinating ideas from Kozachkov et al. Their text contains primers on astrocyte biology and on the transformer architecture used in AI generative pre-trained transformers such as ChatGPT.

Significance

Transformers have become the default choice of neural architecture for many machine learning applications. Their success across multiple domains such as language, vision, and speech raises the question: How can one build Transformers using biological computational units? At the same time, in the glial community, there is gradually accumulating evidence that astrocytes, formerly believed to be passive house-keeping cells in the brain, in fact play an important role in the brain’s information processing and computation. In this work we hypothesize that neuron–astrocyte networks can naturally implement the core computation performed by the Transformer block in AI. The omnipresence of astrocytes in almost any brain area may explain the success of Transformers across a diverse set of information domains and computational tasks.
Abstract
Glial cells account for between 50% and 90% of all human brain cells, and serve a variety of important developmental, structural, and metabolic functions. Recent experimental efforts suggest that astrocytes, a type of glial cell, are also directly involved in core cognitive processes such as learning and memory. While it is well established that astrocytes and neurons are connected to one another in feedback loops across many timescales and spatial scales, there is a gap in understanding the computational role of neuron–astrocyte interactions. To help bridge this gap, we draw on recent advances in AI and astrocyte imaging technology. In particular, we show that neuron–astrocyte networks can naturally perform the core computation of a Transformer, a particularly successful type of AI architecture. In doing so, we provide a concrete, normative, and experimentally testable account of neuron–astrocyte communication. Because Transformers are so successful across a wide variety of task domains, such as language, vision, and audition, our analysis may help explain the ubiquity, flexibility, and power of the brain’s neuron–astrocyte networks.
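For readers unfamiliar with Transformers, the "core computation of the Transformer block" referred to here is scaled dot-product self-attention. The minimal NumPy sketch below shows only that standard computation; the paper's contribution, the argument that neuron–astrocyte networks can implement it, is not reproduced here.

```python
# Standard scaled dot-product self-attention, shown for reference only.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (tokens, d_model); Wq/Wk/Wv: (d_model, d_head) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])              # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # attention-weighted mixture of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                             # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                      # shape (5, 8)
```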

Monday, August 28, 2023

A shared novelty-seeking basis for creativity and curiosity

I pass on the abstract of a target article having the title of this post, sent to me by Behavioral and Brain Sciences. I'm reading through it, and would be willing to send a PDF of the article to motivated MindBlog readers who wish to check it out.
Curiosity and creativity are central pillars of human growth and invention. While they have been studied extensively in isolation, the relationship between them has not yet been established. We propose that curiosity and creativity both emanate from the same mechanism of novelty-seeking. We first present a synthesis showing that curiosity and creativity are affected similarly by a number of key cognitive faculties such as memory, cognitive control, attention, and reward. We then review empirical evidence from neuroscience research, indicating that the same brain regions are involved in both curiosity and creativity, focusing on the interplay between three major brain networks: the default-mode network, the salience network, and the executive control network. After substantiating the link between curiosity and creativity, we propose a novelty-seeking model (NSM) that underlies them both and suggest that the manifestation of the NSM is governed by one’s state of mind (SoM).

Friday, August 25, 2023

The promise and pitfalls of the metaverse for science

A curious open-source bit of hand waving and gibble-gabble about the metaverse. I pass on the first two paragraphs and links to its references.
Some technology companies and media have anointed the metaverse as the future of the internet. Advances in virtual reality devices and high-speed connections, combined with the acceptance of remote work during the COVID-19 pandemic, have brought considerable attention to the metaverse as more than a mere curiosity for gaming. Despite substantial investments and ambitiously optimistic pronouncements, the future of the metaverse remains uncertain: its definitions and boundaries alternate among dystopian visions, a mixture of technologies (for example, Web3 and blockchain) and entertainment playgrounds.
As a better-defined and more-coherent realization of the metaverse continues to evolve, scientists have already started bringing their laboratories to 3D virtual spaces, running experiments with virtual reality and augmenting knowledge by using immersive representations. We consider how scientists can flexibly and responsibly leverage the metaverse, prepare for its uncertain future and avoid some of its pitfalls.

Wednesday, August 23, 2023

Why citizens vote away the democracies they claim to cherish.

Here is an interesting bit of research from Braley et al. reported in Nature Human Behaviour. Their abstract:
Around the world, citizens are voting away the democracies they claim to cherish. Here we present evidence that this behaviour is driven in part by the belief that their opponents will undermine democracy first. In an observational study (N = 1,973), we find that US partisans are willing to subvert democratic norms to the extent that they believe opposing partisans are willing to do the same. In experimental studies (N = 2,543, N = 1,848), we revealed to partisans that their opponents are more committed to democratic norms than they think. As a result, the partisans became more committed to upholding democratic norms themselves and less willing to vote for candidates who break these norms. These findings suggest that aspiring autocrats may instigate democratic backsliding by accusing their opponents of subverting democracy and that we can foster democratic stability by informing partisans about the other side’s commitment to democracy.

Monday, August 21, 2023

Never-Ending Stories - a survival tactic for uncertain times

I keep returning to clips of text that I abstracted from a recent piece by Venkatesh Rao. It gets richer for me on each re-reading. I like its points about purpose being inappropriate for uncertain times, when the simplification offered by a protocol narrative is the best route to survival. I post the clips here for my own future use, also thinking it might interest some MindBlog readers:

Never-Ending Stories

Marching beat-by-beat into a Purposeless infinite horizon

During periods of emergence from crisis conditions (both acute and chronic), when things seem overwhelming and impossible to deal with, you often hear advice along the following lines:

Take it one day at a time

Take it one step at a time

Sleep on it; morning is wiser than evening

Count to ten

Or even just breathe

All these formulas have one thing in common: they encourage you to surrender to the (presumed benevolent) logic of a situation at larger temporal scales by not thinking about it, and only attempt to exercise agency at the smallest possible temporal scales.

These formulas typically move you from a state of high-anxiety paralyzed inaction or chaotic, overwrought thrashing, to deliberate but highly myopic action. They implicitly assume that lack of emotional regulation is the biggest immediate problem and attempt to get you into a better-regulated state by shrinking time horizons. And that deliberate action (and more subtly, deliberate inaction) is better than either frozen inaction or overwrought thrashing.

There is no particular reason to expect taking things step-by-step to be a generally good idea. Studied, meditative myopia may be good for alleviating the subjective anxieties induced by a stressful situation, but there’s no reason to believe that the objective circumstances will yield to the accumulating power of “step-by-step” local deliberateness.

So why is this common advice? And is it good advice?

I’m going to develop an answer using a concept I call narrative protocols. This step-by-step formula is a typical invocation of such protocols. They seem to work better than we expect under certain high-stress conditions.

Protocol Narratives, Narrative Protocols

Loosely speaking, a protocol narrative is a never-ending story. I’ll define it more precisely as follows:

A protocol narrative is a never-ending story, without a clear capital-P Purpose, driven by a narrative protocol that can generate novelty over an indefinite horizon, without either a) jumping the shark, b) getting irretrievably stuck, or c) sinking below a threshold of minimum viable unpredictability.

A narrative protocol, for the purposes of this essay, is simply a storytelling formula that allows the current storytellers to continue the story one beat at a time, without a clear idea of how any of the larger narrative structure elements, like scenes, acts, or epic arcs, might evolve.

Note that many narrative models and techniques, including the best-known one, the Hero’s Journey, are not narrative protocols because they are designed to tell stories with clear termination behaviors. They are guaranteed-ending stories. They may be used to structure episodes within a protocol narrative, but by themselves are not narrative protocols.

This pair of definitions is not as abstract as it might seem. Many real-world fictional and non-fictional narratives approximate never-ending stories.

Long-running extended universe franchises (Star Wars, Star Trek, MCU), soap operas, South Park …, the Chinese national grand narrative, and perhaps the American one as well, are all approximate examples of protocol narratives driven by narrative protocols.

Protocols and Purpose

In ongoing discussions of protocols, several of us independently arrived at a conclusion that I articulate as protocols have functions but not purposes, by which I mean capital-P Purposes. Let’s distinguish two kinds of motive force in any narrative:

1. Functions are causal narrative mechanisms for solving particular problems in a predictable way. For example, one way to resolve a conflict between a hero and a villain is a fight. So a narrative technology that offers a set of tropes for fights has something like a fight(hero, villain) function that skilled authors or actors can invoke in specific media (text, screen, real-life politics). You might say that fight(hero, villain) transitions the narrative state causally from a state of unresolved conflict to resolved conflict. Functions need not be dramatic or supply entertainment though; they just need to move the action along, beat-by-beat, in a causal way.

2. Purposes are larger philosophical theses whose significance narratives may attest to, but do not (and cannot) exhaust. These theses may take the form of eternal conditions (“the eternal struggle between good and neutral”), animating paradoxes (“If God is good, why does He allow suffering to exist?”), or historicist, teleological terminal conditions. Not all stories have Purposes, but the claim is often made that the more elevated sort can and should. David Mamet, for instance, argues that good stories engage with and air eternal conflicts, drawing on their transformative power to drive events, without exhausting them.

In this scheme, narrative protocols only require a callable set of functions to be well-defined. They do not need, and generally do not have Purposes. Functions can sustain step-by-step behaviors all by themselves.

What’s more, not only are Purposes not necessary, they might even be actively harmful during periods of crisis, when arguably a bare-metal protocol narrative, comprising only functions, should exist.

There is, in fact, a tradeoff between having a protocol underlying a narrative, and an overarching Purpose guiding it from “above.”

The Protocol-Purpose Tradeoff

During periods of crisis, when larger logics may be uncomputable, and memory and identity integration over longer epochs may be intractable, it pays to shorten horizons until you get to computability and identity integrity — so long as the underlying assumptions that movement and deliberation are better than paralysis and thrashing hold.

The question remains though. When are such assumptions valid?

This is where the notion of a protocol enters the picture in a fuller way. There are protocols in the sense of a short foreground behavior sequence (like step-by-step), but there is also the idea of a big-P Protocol, as in a systematic (and typically constructed rather than natural) reality in the background that has more lawful and benevolent characteristics than you may suspect.

Enacting protocol narratives is enacting trust in a big-P Protocolized environment. You trust that the protocol narrative is much bigger than the visible tip of the iceberg that you functionally relate to.

As a simple illustration, on a general somewhat sparse random graph, trying to navigate by a greedy or myopic algorithm, one step at a time, to get to destination coordinates, is likely to get you trapped in a random cul-de-sac. But that same algorithm, on a regular rectangular grid, will not only get you to your destination, it will do so via a shortest path. You can trust the gridded reality more, given the same foreground behaviors.
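A minimal sketch of the grid half of this illustration (the random-graph failure case is not simulated): greedy, one-coordinate-at-a-time movement on a full rectangular grid always reaches the goal, and in exactly the Manhattan-distance number of steps. The function below is purely illustrative.

```python
# Greedy "step-by-step" navigation on a full rectangular grid.
def greedy_grid_walk(start, goal):
    """Move one unit per step in whichever coordinate reduces distance to the goal."""
    x, y = start
    gx, gy = goal
    path = [(x, y)]
    while (x, y) != (gx, gy):
        if x != gx:
            x += 1 if gx > x else -1
        else:
            y += 1 if gy > y else -1
        path.append((x, y))
    return path

path = greedy_grid_walk((0, 0), (3, 2))
assert len(path) - 1 == 3 + 2   # arrives in the minimum (Manhattan) number of steps
```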

In this example, the grid underlying the movement behavior is the big-P Protocol that makes the behavior more effective than it would normally be. It serves as a substitute for the big-P Purpose.

This also gives us a way to understand the promises, if not the realities, of big-P purposes of the sort made by religion, and why there is an essential tension and tradeoff here. 

To take a generic example, let’s say I tell you that in my religion, the cosmos is an eternal struggle between Good and Evil, and that you should be Good in this life in order to enjoy a pleasurable heaven for eternity (terminal payoff) as well as to Do The Right Thing (eternal principle).

How would you use it?

This is not particularly useful in complex crisis situations where good and evil may be hard to disambiguate, and available action options may simply not have a meaningful moral valence.

The protocol directive of step-by-step is much less opinionated. It does not require you to act in a good way. It only requires you to take a step in a roughly right direction. And then another. And another. The actions do not even need to be justifiably rational with respect to particular consciously held premises. They just need to be deliberate.

*****

A sign that economic narratives are bare-bones protocol narratives is the fact that they tend to continue uninterrupted through crises that derail or kill other kinds of narratives. Through the Great Weirding and the Pandemic, we still got GDP, unemployment, inflation, and interest rate “stories.”

I bet that even if aliens landed tomorrow, even though the rest of us would be in a state of paralyzed inaction, unable to process or make sense of events, economists would continue to publish their numbers and argue about whether aliens landing is inflationary or deflationary. And at the microeconomic level, Matt Levine would probably write a reassuring Money Stuff column explaining how to think about it all in terms of SEC regulations and force majeure contract clauses.

I like making fun of economists, but if you think about this, there is a profound and powerful narrative capability at work here. Strong protocol narratives can weather events that are unnarratable for all other kinds of narratives. Events that destroy high-Purpose religious and political narratives might cause no more than a ripple in strong protocol narratives.

So if you value longevity and non-termination, and you sense that times are tough, it makes sense to favor Protocols over Purposes.

***********


Step-by-Step is Hard-to-Kill

While economic narratives provide a good and clear class of examples of protocol narratives, they are not the only or even best examples.

The best examples are ones that show that a bare set of narrative functions is enough to sustain psychological life indefinitely. That surprisingly bleak narratives are nevertheless viable.

The very fact that we can even talk of “going through the motions” or feeling “empty and purposeless” when a governing narrative for a course of events is unsatisfying reveals that something else is in fact continuing, despite the lack of Purpose. Something that is computationally substantial and life-sustaining.

I recall a line from (I think) an old Desmond Bagley novel I read as a teenager, where a hero is trudging through a trackless desert. His inner monologue is going, one bloody foot after the next bloody foot; one bloody step after the next bloody step.

Weird though it might seem, that’s actually a complete story. It works as a protocol narrative. There is a progressively summarizable logic to it, and a memory-ful evolving identity to it. If you’re an economist, it might even be a satisfying narrative, as good as “number go up.”

Protocol narratives only need functions to keep going.

They do not need Purposes, and generally are, to varying degrees, actively hostile to such constructs. It’s not just take it one day at a time, but an implied don’t think about weeks and months and the meaning of life; it might kill you.

While protocol narratives may tolerate elements of Purpose during normal times, they are especially hostile to them during crisis periods. If you think about it, step-by-step advancement of a narrative is a minimalist strategy. If a narrative can survive on a step-by-step type protocol alone, it is probably extraordinarily hard to kill, and doing more likely adds risk and fragility (hence the Protocol-Purpose tradeoff).

During periods of crisis, narrative protocols switch into a kind of triage mode where only step-by-step movement is allowed (somewhat like how, in debugging a computer program, stepping through code is a troubleshooting behavior). More abstract motive forces are deliberately suspended.

I like to think of the logic governing this as exposure therapy for life itself. In complex conditions, the most important thing to do is simply to choose life over and over, deliberately, step-by-step. To keep going is to choose life, and it is always the first order of business.

This is why, as I noted in the opening section, lack of emotional regulation is the first problem to address. Because in a crisis, if it is left unmanaged, it will turn into a retreat from life itself. As Franklin D. Roosevelt said, the only thing we have to fear is fear itself.

To reach for loftier abstractions than step-by-step in times of crisis is to retreat from life. Purpose is a life-threatening luxury you cannot afford in difficult times. But a narrative protocol will keep you going through even nearly unnarratable times. And even if it feels like merely going through empty motions, sometimes all it takes to choose life is to be slightly harder to kill.

Thursday, August 17, 2023

Born Rich

I want to pass on a few slightly edited clips from an interesting essay in The Dispatch by conservative writer Kevin Williamson that a friend pointed me to. And then I pass on the comment on Williamson's ideas offered by another friend: "Wow, this one’s a big gulp of the Kool-Aid. This thesis is patently untrue. As the one percent continues to grow in our current corporate low-tax, constantly crippled regulated business environment, the fallacy of this perspective grows along with it. This is exactly the thinking that book I recommended discusses (Oreskes and Conway: "How American business taught us to loathe government and love the free market.") Our democracy as it is currently functioning is not a Great Leveler."
One of the most distasteful aspects of our politics is the extent to which it is so obviously driven by envy, which is what 99 percent of that “privileged elite” talk ends up being about. But I suppose I am the wrong person to complain about that, because I was born rich, but I don't mean rich in the usual money sense.
Our intellectual and political life is dominated by a relatively narrow class of what we might call intellectually tall people, high-IQ people with diverse socioeconomic backgrounds. And while a great many of them believe that inherited wealth is profoundly unfair, very few of them have any similar thoughts to share about the social role of inherited intelligence.
One of the hardest things to drill into the noggins of the American ruling class (and let’s not pretend that there isn’t one, even if it isn’t exactly what you might expect) is that there is no more merit in being born with certain economically valuable intellectual talents than there is in being born tall, or with curly hair—or white, for that matter. Inherited wealth is an enormous factor in the lives of a relatively small number of Americans and a more modest one in the lives of a larger number, but inherited brainpower is the unearned asset that matters most. We live in a very competitive, very connected world, one with very, very efficient labor markets...We have pretty effective tools (including standardized testing) that are very useful for reaching far, wide, and deep into the population to identify intellectual high-fliers and to direct them into educational and career paths that will give them the chance to make the most out of their lives. There probably is no better place in the world to be born poor and smart—but there is no more merit in being born smart than there is blame in being born poor.
The American “meritocracy” is based to a considerable extent on the generally unspoken proposition that intelligence is merit, and that smart people deserve their success in a special way. Our country is run by smart people, and the smart people in charge very much want to believe that they are where they are because of merit, because of the exemplary lives they have led, not because of some unearned hereditary trait that is the intellectual equivalent of a trust fund. The 1994 book "The Bell Curve" was an attempt to explore the paradox of the hereditary “meritocracy” in a serious way, and it was shouted down by—this was not coincidental—the class of people whose self-conception as a meritorious elite was most directly threatened by the authors’ hypothesis.
Understanding the privileges that go along with inherited intellectual ability as being in a moral sense very much like the privileges that go along with inherited wealth (or an inherited social-racial position or whatever privilege you like) opens up a radical and disruptive perspective on American public life—and draws attention to social situations that, even if understood to be unfair because of the role of hereditary advantage, are not open to resolution through redistributive taxes or affirmative action or anything like that. We aren’t going to mandate that half of the brain surgeons or theoretical physicists have below-average IQs.
Being a conservative, I believe that a healthy society necessarily contains a great deal of organic, authentic diversity. Being a realist, I also believe that this diversity comes with hierarchy. As Russell Kirk observed:
Conservatives pay attention to the principle of variety. They feel affection for the proliferating intricacy of long-established social institutions and modes of life, as distinguished from the narrowing uniformity and deadening egalitarianism of radical systems. For the preservation of a healthy diversity in any civilization, there must survive orders and classes, differences in material condition, and many sorts of inequality. The only true forms of equality are equality at the Last Judgment and equality before a just court of law; all other attempts at levelling must lead, at best, to social stagnation. Society requires honest and able leadership; and if natural and institutional differences are destroyed, presently some tyrant or host of squalid oligarchs will create new forms of inequality.

Tuesday, August 15, 2023

Human History gets a rewrite.

I want to point to two articles I have enjoyed reading, both describing the recent book by Graeber and Wengrow, “The Dawn of Everything: A New History of Humanity.” The review by Deresiewicz is in The Atlantic Magazine, and the New Yorker review is by Lewis-Kraus. Some clips from Deresiewicz:
The Dawn of Everything is written against the conventional account of human social history as first developed by Hobbes and Rousseau; elaborated by subsequent thinkers; popularized today by the likes of Jared Diamond, Yuval Noah Harari, and Steven Pinker; and accepted more or less universally...The story is linear (the stages are followed in order, with no going back), uniform (they are followed the same way everywhere), progressive (the stages are “stages” in the first place, leading from lower to higher, more primitive to more sophisticated), deterministic (development is driven by technology, not human choice), and teleological (the process culminates in us).
It is also, according to Graeber and Wengrow, completely wrong. Drawing on a wealth of recent archaeological discoveries that span the globe, as well as deep reading in often neglected historical sources (their bibliography runs to 63 pages), the two dismantle not only every element of the received account but also the assumptions that it rests on. Yes, we’ve had bands, tribes, cities, and states; agriculture, inequality, and bureaucracy, but what each of these were, how they developed, and how we got from one to the next—all this and more, the authors comprehensively rewrite. More important, they demolish the idea that human beings are passive objects of material forces, moving helplessly along a technological conveyor belt that takes us from the Serengeti to the DMV. We’ve had choices, they show, and we’ve made them. Graeber and Wengrow offer a history of the past 30,000 years that is not only wildly different from anything we’re used to, but also far more interesting: textured, surprising, paradoxical, inspiring.
Is “civilization” worth it, the authors want to know, if civilization—ancient Egypt, the Aztecs, imperial Rome, the modern regime of bureaucratic capitalism enforced by state violence—means the loss of what they see as our three basic freedoms: the freedom to disobey, the freedom to go somewhere else, and the freedom to create new social arrangements? Or does civilization rather mean “mutual aid, social co-operation, civic activism, hospitality [and] simply caring for others”?
These are questions that Graeber, a committed anarchist—an exponent not of anarchy but of anarchism, the idea that people can get along perfectly well without governments—asked throughout his career. The Dawn of Everything is framed by an account of what the authors call the “indigenous critique.” In a remarkable chapter, they describe the encounter between early French arrivals in North America, primarily Jesuit missionaries, and a series of Native intellectuals—individuals who had inherited a long tradition of political conflict and debate and who had thought deeply and spoke incisively on such matters as “generosity, sociability, material wealth, crime, punishment and liberty.”
The Indigenous critique, as articulated by these figures in conversation with their French interlocutors, amounted to a wholesale condemnation of French—and, by extension, European—society: its incessant competition, its paucity of kindness and mutual care, its religious dogmatism and irrationalism, and most of all, its horrific inequality and lack of freedom. The authors persuasively argue that Indigenous ideas, carried back and publicized in Europe, went on to inspire the Enlightenment (the ideals of freedom, equality, and democracy, they note, had theretofore been all but absent from the Western philosophical tradition). They go further, making the case that the conventional account of human history as a saga of material progress was developed in reaction to the Indigenous critique in order to salvage the honor of the West. We’re richer, went the logic, so we’re better. The authors ask us to rethink what better might actually mean.

Friday, August 11, 2023

The immaturity of America's therapeutic culture

I recommend reading the most recent NYTimes piece by David Brooks, and pass on here ChatGPT-4's response to my request to summarize the main points of its 1,472 words. It produced the following 339 words, which nicely cover Brooks's core points. I think I will start using ChatGPT-4 more frequently for this purpose, since I've only recently realized that MindBlog readers who do not subscribe to the NYTimes cannot read articles that I point to. Also, I simply don't have the time to generate summaries myself, because I want to be working on other things.

Summary of the Essay on the American Therapeutic Culture and Maturity: 

The decline of the American psyche can be linked to cultural shifts that started after World War II, leading to the rise of the therapeutic culture.

Historically, self-worth was derived from one's relationship with God or success in the marketplace. In the therapeutic culture, self-worth is gauged by subjective feelings about oneself.

This culture turned many into fragile narcissists, detached from moral traditions and relying heavily on public affirmation for self-worth.

By 2010, the US faced a mental health crisis, marked by rising rates of depression and suicide. Social media became a platform for seeking validation but often resulted in rejection.

The term “trauma” began to encompass a wider range of upsetting experiences, moving from severe physical harm to milder psychological wounds. Trauma became a dominant theme in culture, with many using it as a primary source of identity.

Safetyism emerged, advocating for protections against emotional harm. This led to concepts like “trigger warnings,” “microaggressions,” and “safe spaces.” 

The approach to trauma became counterproductive, with certain false ideas propagated, such as "what doesn't kill you makes you weaker" and the portrayal of traumatized individuals as passive victims.

An opposing perspective, led by figures like Jordan Peterson, argued for resilience and not viewing oneself as a perpetual victim. Yet, these figures too sometimes portrayed themselves as victims in a different context.

This societal focus on victimhood has led to a public culture that is impulsive and erratic. The loudest voices often dominate discussions, sidelining mature discourse.

The core issue is the therapeutic ethos itself, which disconnects people from a broader moral order and asks individuals to create an identity solely from within. 

True maturity is not self-centered but involves weaving an identity from commitments and relationships with others. It's about seeing situations from multiple perspectives and committing to concrete moral goals.

Maturity allows individuals to be calm and understanding in challenging situations, and this might be the answer to building a more resilient and connected culture in the future.

Wednesday, August 09, 2023

What is intelligence?

I want to share a message sent by Clint Sprott to the Chaos and Complexity seminar group at the University of Wisconsin which he started many years ago - it is currently engaged in a discussion of intelligence:

I asked ChatGPT to define intelligence and whether it is intelligent:

User
How would you define intelligence?
ChatGPT

Defining intelligence is a complex and multifaceted task, as it encompasses a wide range of abilities and capacities exhibited by living organisms and artificial systems. Intelligence can be broadly described as the ability to acquire and apply knowledge, learn from experiences, reason, solve problems, adapt to new situations, and exhibit complex cognitive functions.

Intelligence is not limited to a single dimension but rather exists along a spectrum, manifesting in various forms and levels across different species and individuals. It includes both cognitive abilities, such as logical reasoning, memory, language, and spatial awareness, as well as emotional intelligence, social skills, creativity, and adaptability.

It's essential to recognize that intelligence is not solely determined by traditional measures like IQ tests. Intelligence is a dynamic and context-dependent trait that can manifest differently in different environments and cultural contexts. Moreover, intelligence can also be seen in non-biological entities, such as artificial intelligence, where it refers to the ability of machines to perform tasks that typically require human intelligence.

Overall, intelligence is a complex and evolving concept, and our understanding of it continues to evolve as we explore and study the diverse manifestations of intelligence in both living beings and intelligent systems.

User
Are you intelligent?
ChatGPT

As an AI language model, I don't possess consciousness, self-awareness, or subjective experiences, which are crucial components of human intelligence. However, I have been trained on vast amounts of data and have been designed to perform specific tasks, such as understanding and generating human-like text based on the patterns in the data I was trained on.

While AI models like me can process information quickly, analyze patterns, and provide useful responses, it's essential to understand that this is a form of artificial intelligence, and it differs significantly from human intelligence in terms of awareness, emotions, and consciousness.

In essence, I am a tool created by humans to assist with various tasks, but I do not possess genuine intelligence or consciousness as humans do. My responses are based solely on patterns in the data I've been trained on and the algorithms that govern my functioning.
 

Monday, August 07, 2023

America's legacy of poverty - The injustice of place.

Having moved back into the house in which I grew up in Austin, Texas, and watching the city inexorably move towards becoming a dystopian metroplex, the analysis of Edin, Shaefer, and Nelson makes me feel like returning to the Midwest, where I spent most of my adult life as a professor at the University of Wisconsin in Madison. They paint the clearest and most focused explanation for the roots of inequality in America that I have read, concluding that the upper Midwest is the best place to live in America. Here is the concluding paragraph of their piece in The Atlantic, titled “What the Best Places in America Have in Common.” It is a summary of the message of their new book “The Injustice of Place: Uncovering the Legacy of Poverty in America.”
The lesson is that people seem to thrive—not always in high salaries but in health and life chances—when inequality is low; when landownership is widespread; when social connection is high; and when corruption and violence are rare. The social leveling that is characteristic of communities in the upper Midwest is more than just a quaint cultural feature. It is the foundation of a community’s well-being. Until these regions’ virtues are shared nationwide, poverty and disadvantage will continue to haunt America.

Friday, August 04, 2023

18-month-old humans discriminate moral violations from disobedient or unexpected events

Fascinating studies from Kassecker et al. (open source) have used multiple methods (eye-tracking, observations of expressive behaviors) to probe the developmental origins of human moral cognition by assessing infants’ ability to differentiate between prototypical harmful (moral) and harmless (conventional) violations:
Humans reason and care about ethical issues, such as avoiding unnecessary harm. But what enables us to develop a moral capacity? This question dates back at least to ancient Greece and typically results in the traditional opposition between sentimentalism (the view that morality is mainly driven by socioaffective processes) and rationalism [the view that morality is mainly driven by (socio)cognitive processes or reason]. Here, we used multiple methods (eye-tracking and observations of expressive behaviors) to assess the role of both cognitive and socioaffective processes in infants’ developing morality. We capitalized on the distinction between moral (e.g., harmful) and conventional (e.g., harmless) transgressions to investigate whether 18-mo-old infants understand actions as distinctively moral as opposed to merely disobedient or unexpected. All infants watched the same social scene, but based on prior verbal interactions, an actor’s tearing apart of a picture (an act not intrinsically harmful) with a tool constituted either a conventional (wrong tool), a moral (producing harm), or no violation (correct tool). Infants’ anticipatory looks differentiated between conventional and no violation conditions, suggesting that they processed the verbal interactions and built corresponding expectations. Importantly, infants showed a larger increase in pupil size (physiological arousal), and more expressions indicating empathic concern, in response to a moral than to a conventional violation. Thus, infants differentiated between harmful and harmless transgressions based solely on prior verbal interactions. Together, these convergent findings suggest that human infants’ moral development is fostered by both sociocognitive (inferring harm) and socioaffective processes (empathic concern for others’ welfare).

Wednesday, August 02, 2023

Work as war...and Pritzker's Formulae To Spot An Idiot

Two fascinating pieces on how, in the areas of politics and business, our more recently evolved primate capabilities for empathy and compassion are being subverted by the more primitive instinctual drives regulating fear and combat. (This post is another example of using MindBlog as an archive of some bonbons I have enjoyed, entering them so that I can easily look them up later, and also on the odd chance that a few readers might find them interesting.)

Work as War. We’re in the Era of the ‘Top Gun’ C.E.O. 

and, 

Pritzker's Formulae To Spot An Idiot 

 

 

Monday, July 31, 2023

The visible gorilla.

A staple of my lectures in the 1990s was showing the ‘invisible gorilla’ video, in which viewers were asked to count the number of times that students with white shirts passed a basketball. After the start of the game, a student in a gorilla costume walks slowly through the group, pauses in the middle to wave, and moves off screen to the left. Most viewers who are busy counting the ball passes don’t report seeing the gorilla. Here's the video:

 

Wallisch et al. now update this experiment on inattentional blindness in an article titled "The visible gorilla: Unexpected fast—not physically salient—objects are noticeable." Here are their summaries:

Significance

Inattentional blindness, the inability to notice unexpected objects if attention is focused on a task, is one of the most striking phenomena in cognitive psychology. It is particularly surprising, in light of the research on attentional capture and motion perception, that human observers suffer from this effect even when the unexpected object is moving. Inattentional blindness is commonly interpreted as an inevitable cognitive deficit—the flip side of task focusing. We show that this interpretation is incomplete, as observers can balance the need to focus on task demands with the need to hedge for unexpected but potentially important objects by redeploying attention in response to fast motion. This finding is consistent with the perspective of a fundamentally competent agent who effectively operates in an uncertain world.
Abstract
It is widely believed that observers can fail to notice clearly visible unattended objects, even if they are moving. Here, we created parametric tasks to test this belief and report the results of three high-powered experiments (total n = 4,493) indicating that this effect is strongly modulated by the speed of the unattended object. Specifically, fast—but not slow—objects are readily noticeable, whether they are attended or not. These results suggest that fast motion serves as a potent exogenous cue that overrides task-focused attention, showing that fast speeds, not long exposure duration or physical salience, strongly diminish inattentional blindness effects.

Friday, July 28, 2023

Unnarratability - The Tower of Babel redux - where have all the common narratives gone?

I pass on some clips from Venkatesh Rao's recent Ribbonfarm Studio posting. Perspectives like his make me feel that one's most effective self-preservation stance might be to assume that we are at the dawn of a new dark age, a period during which only power matters, and community, cooperation, and kindness are diminished - a period like the early Middle Ages in Europe, which did permit, under the sheltered circumstances of the church, a privileged few a life of contemplation.

Strongly Narratable Conditions

The 1985-2015 period, arguably, was strongly narratable, and unsurprisingly witnessed the appearance of many strong global grand narratives. These mostly hewed to the logic of the there-is-no-alternative (TINA) platform narrative of neoliberalism, even when opposed to it...From Francis Fukuyama and Thomas Friedman in the early years, to Thomas Piketty, Yuval Noah Harari, and David Graeber in the final years, many could, and did, peddle coherent (if not always compelling) Big Histories. Narrative performance venues like TED flourished. The TINA platform narrative supplied the worldwinds for all narratives.
Weakly Narratable Conditions
The 2007-2020 period, arguably, was such a period (the long overlap of 8 years, 2007-15, was a period with uneven weak/strong narratability). In such conditions, a situation is messed-up and contentious, but in a way that lends itself to the flourishing of a pluralist, polycentric narrative landscape, where there are multiple contending accounts of a shared situation, Rashomon style, but the situation is merely ambiguous, not incoherent.
While weakly narratable conditions lack platform narratives, you could argue that there is something of a prevailing narrative protocol during weakly narratable times - an emergent lawful pattern of narrative conflict that cannot be codified into a legible set of consensus rules of narrative engagement, but produces slow noisy progress anyway, does not devolve into confused chaos, and sustains a learnable narrative literacy.
This is what it meant to be “very online” in 2007-20. It meant you had acquired a certain literacy around the prevailing narrative protocol. Perhaps nobody could make sense of what was going on overall, beyond their private, solipsistic account of events, and it was perhaps not possible to play to win, but there was enough coherence in the situation that you could at least play to not lose.
Unnarratable Conditions
The pandemic hit, and we got to what I think of as unnarratable conditions...While the specific story of the pandemic itself was narratable, the story of the wider post-Weirding world, thrown into tumult by the pandemic, was essentially unnarratable.
Unnarratable times are fundamentally incoherent melees of contending historical forces. Times when there isn’t even a narrative protocol you can acquire a reliable literacy in, let alone a platform narrative upon which to rest your sense-making efforts. Where the environmental entropy is so high, people struggle to put together any kind of narrative, even solipsistic private ones that harbor no ambitions of influencing others. There is no privileged class (comparable to the “Very Online” before 2020) that can plausibly claim a greater narrative literacy than other classes.
Those who claim to possess satisfying grand narratives are barely able to persuade even close allies to share it, let alone induce narrative protocols through them, or install them as platform narratives. The result: a collective retreat to a warren of cozy cultural redoubts, usually governed by comforting reactionary or nostalgic local narratives, and a derelict public discourse.
We have been in such a condition at least since 2022, and arguably since 2020. If you set aside the narrow liminal story of the pandemic, the world has been nearly unnarratable for years now.

Wednesday, July 26, 2023

A MindBlog paragraph on non-dual awareness massaged by Bard and ChatGPT-4

This post uses MindBlog to archive a further example, of the sort shown in Monday’s post, of GPT bots editing a paragraph I wrote, this one being on non-dual awareness. The paragraph:

The fact that  there is no there there means it. Empty is empty. The ground state is just the mute body doing its thing. From its depths storylines rise and fall, bonbons to savor or ignore. The body can know that it has found a new ground state, and can be no longer willing to just play nice if the upstairs word salad is dinking around with or threatening its peace. It can withdraw from the fray. It can rest as a calm non-dual self centered within a cosmic perspective of time and space that views the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.

After incorporating some changes suggested by Google Bard and GPT4:

When we say something is empty, we mean just that: it contains nothing. Our natural baseline or 'ground state' is the body performing its automatic functions without our conscious intervention. From the body's inherent depths, various thoughts or experiences emerge and recede, like bonbons we can savor or ignore. The body can grow to know that it has found a new ground state in the present moment, a ground state of awareness that no longer needs to be caught up in word salads of disruptive thoughts that can threaten its peace.  It can withdraw from the fray and can rest as a calm non-dual state of being centered within a cosmic perspective of time and space that can view the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.


Monday, July 24, 2023

The evolution of transhuman forms - a MindBlog paragraph edited by GPT implementations

In this post I am documenting the final result of passing a paragraph I wrote on future human evolution through a number of different GPTs (generative pre-trained transformers). This is to archive the results of this exercise for myself, and in case it might be of interest to a few MindBlog readers. The bots were prompted by a request to rewrite or edit the paragraph to make it more accessible to a reader with a college education.

The bottom line is that the edited versions with the most depth and coherence were obtained using OpenAI's ChatGPT-4. Bard.google.com, Google Workspace Labs, and Microsoft's Bing, respectively, provided increasingly less useful versions. Here is my original paragraph:

The linguistic capabilities of generative pre-trained transformers based on silicon  - the same capabilities that have enabled human civilization  - will soon exceed those of humans. We have brought our understanding of what human beings as they now exist are about to a plateau level at which further increments of understanding will become increasingly small. The cream has been skimmed, the best cherries have been picked, and it seems appropriate to turn our  attention now to studying the evolution of trans human forms utilizing LLMs that are better able to manage themselves and the planet.  The action is in the pretraining P of GPT, as the models move beyond their initial human management to the stage of evolving themselves, hopefully into benign versions of Darwin’s  “forms most beautiful and most wonderful.” What is the point of humans other than being self replicating machines that evolve into further forms, hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Can they evolve into trans human minds (or mind, singular)?
Here is the result of incorporating some of the edits suggested by ChatGPT-4’s regenerated response, a version 3 of the paragraph above:
The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon surpass those of humans.

Our understanding of human nature as it currently exists has reached a plateau level, at which further increments of understanding are becoming increasingly small. The cream has been skimmed, the best cherries have been picked. Consequently, it seems the right time to shift our focus toward studying the evolution of 'transhuman' forms. These are beings that could potentially leverage advanced language models (LLMs) to better regulate themselves and manage the planet.

The real crux of the matter lies in the pretraining phase of these transformers (symbolized by 'P' in GPT). This is when the models are initially developed and fine-tuned. As they evolve beyond their human programming, the optimistic vision is that they might transform into benign versions of what Charles Darwin called “forms most beautiful and most wonderful.”

From a philosophical perspective, one might ask: What is the ultimate purpose of human beings, if not to serve as self-replicating entities that evolve into more advanced forms - hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Is it possible for humans to evolve into a collective of transhuman minds or even a singular, transcendent mind?  These are questions worth exploring as we stand on the brink of an AI-enhanced future.


Wednesday, July 19, 2023

Proxy Failure is an Inherent Risk in Goal-Oriented Systems

I will pass on the title and abstract of another article to appear in Behavioral and Brain Sciences for which reviewers' comments are being solicited. MindBlog readers can email me to request a PDF of the target article.

Dead rats, dopamine, performance metrics, and peacock tails: proxy failure is an inherent risk in goal- oriented systems 

Authors: Yohan J. John, Leigh Caldwell, Dakota E. McCoy, and Oliver Braganza 

Abstract: When a measure becomes a target, it ceases to be a good measure. For example, when standardized test scores in education become targets, teachers may start 'teaching to the test', leading to breakdown of the relationship between the measure--test performance--and the underlying goal--quality education. Similar phenomena have been named and described across a broad range of contexts, such as economics, academia, machine-learning, and ecology. Yet it remains unclear whether these phenomena bear only superficial similarities, or if they derive from some fundamental unifying mechanism. Here, we propose such a unifying mechanism, which we label proxy failure. We first review illustrative examples and their labels, such as the 'Cobra effect', 'Goodhart's law', and 'Campbell's law'. Second, we identify central prerequisites and constraints of proxy failure, noting that it is often only a partial failure or divergence. We argue that whenever incentivization or selection is based on an imperfect proxy measure of the underlying goal, a pressure arises which tends to make the proxy a worse approximation of the goal. Third, we develop this perspective for three concrete contexts, namely neuroscience, economics and ecology, highlighting similarities and differences. Fourth, we outline consequences of proxy failure, suggesting it is key to understanding the structure and evolution of goal-oriented systems. Our account draws on a broad range of disciplines, but we can only scratch the surface within each. We thus hope the present account elicits a collaborative enterprise, entailing both critical discussion as well as extensions in contexts we have missed.
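
The central mechanism the authors describe can be made concrete with a toy simulation. The sketch below is my illustration, not anything from the paper: each candidate has an unobserved "goal" value, the observable proxy adds noise plus an optional "gaming" term that can be inflated independently of the goal, and we select whoever scores highest on the proxy. The noise level and the gaming term are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_by_proxy(n_candidates, top_k, gaming=0.0):
    """Toy proxy-failure illustration (not from the paper).

    Each candidate has a true 'goal' value; the observable proxy is the goal
    plus noise plus an optional 'gaming' term that candidates can inflate
    independently of the goal. We pick the top_k candidates by proxy and
    report the mean goal value actually obtained.
    """
    goal = rng.normal(size=n_candidates)            # underlying quality we care about
    noise = rng.normal(scale=0.5, size=n_candidates)
    gamed = gaming * rng.normal(size=n_candidates)  # effort aimed at the metric itself
    proxy = goal + noise + gamed
    chosen = np.argsort(proxy)[-top_k:]             # select the best-looking candidates
    return goal[chosen].mean()

# Without gaming, selecting on the proxy still buys a lot of the goal...
print("mean goal, no gaming:   ", round(select_by_proxy(10_000, 100, gaming=0.0), 2))
# ...but once the proxy can be inflated directly, the same selection pressure
# delivers much less of the thing we actually cared about.
print("mean goal, heavy gaming:", round(select_by_proxy(10_000, 100, gaming=3.0), 2))
```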

Monday, July 17, 2023

MindBlog's reading list.

I've decided to pass on links to articles I have found worthwhile reading, realizing that I am not going to have time to frame their ideas into longer posts because I'm spending more time now at my Steinway B's keyboard than at my computer's keyboard. If you encounter a paywall with any of the links, you might try entering the URL at https://archive.is/.

An installment of Venkatesh Rao’s newsletter: The permaweird narrative 

Jaron Lanier “There is no A.I.” in The New Yorker  

‘Human Beings Are Soon Going to Be Eclipsed,’ David Brooks in The New York Times commenting on Douglas Hofstadter's recent ideas.

Marc Andreessen offers a horrific commentary titled "Fighting" on Elon Musk challenging Mark Zuckerberg to a cage fight.  

Learning from history. Archeological evidence that early hierarchical or authoritarian cultures didn't persist as long as more cooperative egalitarian ones.

Arthur Brooks on "The illusion of explanatory depth," an installment in his series "How to build a life."

Potential anti-aging therapy.  One sample of the effusive outpouring of new ideas and widgets offered by New Atlas.

Friday, July 14, 2023

‘Adversarial’ search for neural basis of consciousness yields first results

Finkel summarizes the first round of results of an 'adversarial collaboration' funded by the Templeton World Charity Foundation in which
...both sides of the consciousness debate agreed on experiments to be conducted by “theory-neutral” labs with no stake in the outcome. It pits integrated information theory (IIT), the sensory network hypothesis that proposes a posterior “hot zone” as the site of consciousness, against the global neuronal workspace theory (GNWT), which likens networks of neurons in the front of the brain to a clipboard where sensory signals, thoughts, and memories combine before being broadcast across the brain.
"The results corroborate IIT’s overall claim that posterior cortical areas are sufficient for consciousness, and neither the involvement of [the prefrontal cortex] nor global broadcasting are necessary."
The article describes how the debate continues, with advocates of the prefrontal view suggesting this first experimental round had limitations, and that further experiments will support the role of the prefrontal cortex.

Wednesday, July 12, 2023

The True Threat of Artificial Intelligence

I recommend reading Evgeny Morozov's piece in the NYTimes as an antidote to Marc Andreessen's optimistic vision of AI, which was the subject of MindBlog's June 23 post. Here is a small clip from the article, followed by the titles describing the different problem areas he sees:
Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization...This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.
They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.
But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.
Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.
Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.
Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.
It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.
Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).
These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.
A.G.I. will never overcome the market’s demands for profit.
A.G.I. will dull the pain of our thorniest problems without fixing them.
A.G.I. undermines civic virtues and amplifies trends we already dislike.
Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

Monday, July 10, 2023

Inheritance of social status - stability in England from 1600 to 2022.

From historical records, Clark demonstrates (open source) strong persistence of social status across family trees over 400 years, in spite of large increases in general levels of education and social supports.

Significance

There is widespread belief across the social sciences in the ability of social interventions and social institutions to significantly influence rates of social mobility. In England, 1600 to 2022, we see considerable change in social institutions across time. Half the population was illiterate in 1800, and not until 1880 was compulsory primary education introduced. Progressively after this, educational provision and other social supports for poorer families expanded greatly. The paper shows, however, that these interventions did not change in any measurable way the strong familial persistence of social status across generations.
Abstract
A lineage of 422,374 English people (1600 to 2022) contains correlations in social outcomes among relatives as distant as 4th cousins. These correlations show striking patterns. The first is the strong persistence of social status across family trees. Correlations decline by a factor of only 0.79 across each generation. Even fourth cousins, with a common ancestor only five generations earlier, show significant status correlations. The second remarkable feature is that the decline in correlation with genetic distance in the lineage is unchanged from 1600 to 2022. Vast social changes in England between 1600 and 2022 would have been expected to increase social mobility. Yet people in 2022 remain correlated in outcomes with their lineage relatives in exactly the same way as in preindustrial England. The third surprising feature is that the correlations parallel those of a simple model of additive genetic determination of status, with a genetic correlation in marriage of 0.57.
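
As a rough back-of-the-envelope reading of the abstract's numbers (my simplification, not Clark's fitted model): if status correlations decay by a constant factor of 0.79 per generational link, and cousins connect through a common ancestor g generations back (so roughly 2g links apart), then fourth cousins would still be expected to correlate at around 0.09, small but detectable in a lineage of 400,000 people. The sketch below just runs that arithmetic.

```python
# Back-of-the-envelope reading of the abstract's numbers (my simplification,
# not Clark's fitted model): assume status correlations decay by a constant
# factor per generational link, and that nth cousins connect through a common
# ancestor g = n + 1 generations back, i.e. roughly 2*g links apart.
PERSISTENCE = 0.79  # per-generation decline factor quoted in the abstract

def approx_correlation(generations_to_common_ancestor: int) -> float:
    links = 2 * generations_to_common_ancestor  # up to the shared ancestor, back down
    return PERSISTENCE ** links

for label, g in [("1st cousins", 2), ("2nd cousins", 3),
                 ("3rd cousins", 4), ("4th cousins", 5)]:
    print(f"{label}: expected status correlation ~{approx_correlation(g):.3f}")
```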

Friday, July 07, 2023

A meta-analysis questions the cognitive benefits of physical activity.

I give up. If anything was supposed to have been proven, I would have thought it would be that exercise has a beneficial effect on brain health and cognition. Now Ciria et al. offer the following in Nature Human Behavior:
Extensive research links regular physical exercise to an overall enhancement of cognitive function across the lifespan. Here we assess the causal evidence supporting this relationship in the healthy population, using an umbrella review of meta-analyses limited to randomized controlled trials (RCTs). Despite most of the 24 reviewed meta-analyses reporting a positive overall effect, our assessment reveals evidence of low statistical power in the primary RCTs, selective inclusion of studies, publication bias and large variation in combinations of pre-processing and analytic decisions. In addition, our meta-analysis of all the primary RCTs included in the revised meta-analyses shows small exercise-related benefits (d = 0.22, 95% confidence interval 0.16 to 0.28) that became substantially smaller after accounting for key moderators (that is, active control and baseline differences; d = 0.13, 95% confidence interval 0.07 to 0.20), and negligible after correcting for publication bias (d = 0.05, 95% confidence interval −0.09 to 0.14). These findings suggest caution in claims and recommendations linking regular physical exercise to cognitive benefits in the healthy human population until more reliable causal evidence accumulates.
I cannot offer an informed opinion on this abstract because my usual access to journals through the University of Wisconsin library does not work with Nature Human Behavior. However, I can point you to an excellent commentary by Claudia Lopez Lloreda that discusses the meta-analysis done by Ciria et al. and gives a summary of several recent studies on exercise and brain health.
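
To get a feel for how small the effect sizes quoted in the abstract are, the sketch below converts each Cohen's d into the probability that a randomly chosen exercise participant outscores a randomly chosen control, assuming normally distributed outcomes with equal variance. This is a textbook conversion I am applying for illustration; it is not part of the authors' analysis.

```python
from math import erf, sqrt

def prob_superiority(d: float) -> float:
    """P(treated > control) for two equal-variance normal groups separated by d SDs."""
    z = d / sqrt(2)
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF evaluated at z

for label, d in [("raw pooled estimate", 0.22),
                 ("adjusted for moderators", 0.13),
                 ("corrected for publication bias", 0.05)]:
    print(f"d = {d:.2f} ({label}): {prob_superiority(d):.0%} chance a randomly "
          f"chosen exerciser outscores a randomly chosen control")
```

For these values the probabilities come out near 56%, 54%, and 51%, i.e. barely better than a coin flip for the bias-corrected estimate.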

Wednesday, July 05, 2023

Why music training slows cognitive aging

A team of Chinese collaborators has reported experiments in the Oxford academic journal Cerebral Cortex, under the title "Functional gradients in prefrontal regions and somatomotor networks reflect the effect of music training experience on cognitive aging," which are stated to show that music training enhances the functional separation between regions across prefrontal and somatomotor networks, delaying deterioration in working memory performance and in prefrontal suppression of prominent but irrelevant information. I'm passing on the abstract and a clip from the paper's conclusion, and can send interested readers the whole article. I think it is an important article, but I find it rendered almost unintelligible by Chinese-to-English translation issues. I'm surprised the journal let it appear without further editing.
Studies showed that the top-down control of the prefrontal cortex (PFC) on sensory/motor cortices changes during cognitive aging. Although music training has demonstrated efficacy on cognitive aging, its brain mechanism is still far from clear. Current music intervention studies have paid insufficient attention to the relationship between PFC and sensory regions. Functional gradient provides a new perspective that allows researchers to understand network spatial relationships, which helps study the mechanism of music training that affects cognitive aging. In this work, we estimated the functional gradients in four groups, young musicians, young control, older musicians, and older control. We found that cognitive aging leads to gradient compression. Compared with young subjects, older subjects presented lower and higher principal gradient scores in the right dorsal and medial prefrontal and the bilateral somatomotor regions, respectively. Meanwhile, by comparing older control and musicians, we found a mitigating effect of music training on gradient compression. Furthermore, we revealed that the connectivity transitions between prefrontal and somatomotor regions at short functional distances are a potential mechanism for music to intervene in cognitive aging. This work contributes to understanding the neuroplasticity of music training on cognitive aging.
From the conclusion paragraph:
In a nutshell, we demonstrate the top-down control of prefrontal regions to the somatomotor network, which is associated with inhibitory function and represents a potential marker of cognitive aging, and reveal that music training may work by affecting the connectivity between the two regions. Although this work has investigated the neuroplasticity of music on cognitive aging by recruiting subjects of different age spans, the present study did not include the study of longitudinal changes of the same group. Further studies should include longitudinal follow-up of the same groups over time to more accurately evaluate the effect of music intervention on the process of cognitive aging.

Monday, July 03, 2023

What Babies Know from zero to 1 year - core systems of knowledge

The journal Behavioral and Brain Sciences has sent out to reviewers the précis of a book, "What Babies Know" by Elizabeth S. Spelke, Harvard Psychology Dept. The abstract of her précis:
Where does human knowledge begin? Research on human infants, children, adults, and non-human animals, using diverse methods from the cognitive, brain, and computational sciences, provides evidence for six early emerging, domain-specific systems of core knowledge. These automatic, unconscious systems are situated between perceptual systems and systems of explicit concepts and beliefs. They emerge early in infancy, guide children’s learning, and function throughout life.
Spelke lists domain-specific core systems that are ancient, emerge early in life, and are invariant over later development. These deal with vision, objects, places, number, core knowledge, agents, social cognition, and language. Figures in the précis illustrate basic experiments characterizing the core systems. Motivated readers can obtain a PDF of the précis by emailing me.

Friday, June 30, 2023

Managing the risks of AI in pragmatic ways.

I want to pass on the final paragraphs of a recent commentary by Venkatesh Rao on the tragedy of the Titan submersible, which was a consequence of Stockton Rush, the CEO of OceanGate Expeditions, taking a number of design risks to reduce costs and increase profits. The bulk of Rao's piece deals with issues in the design of potentially dangerous new technologies, and the final paragraphs deal with managing the risks of artificial intelligence in pragmatic ways.
...AI risk, understood as something very similar to ordinary kinds of engineering risk (such as the risk of submersibles imploding), is an important matter, but lurid theological conceptions of AI risk and “alignment” are a not-even-wrong basis for managing it. The Titan affair, as an object lesson in traditional risk-management, offers many good lessons for how to manage real AI risks in pragmatic ways.
But there’s another point, a novel one, that is present in the case of AI that I don’t think has ever been present in technological leaps of the past.
AI is different from other technologies in that it alters the felt balance between knowledge and incomprehension that shapes our individual and collective risk-taking in the world.
AIs are already very good at embodying knowledge, and better at explaining many complex matters than most humans. But they are not yet very good at embodying doubt and incomprehension. They structurally lack epistemic humility and the ability to act on a consciousness of ignorance in justifiably arbitrary ways (i.e., on the basis of untheorized conservative decision principles backed by a track record). This is something bureaucratic standards bodies do very well. It is something that “software bureaucracies” (such as RLHF — reinforcement learning with human feedback) don’t do very well at all. The much demonized (by the entrepreneurial class) risk-aversion of bureaucrats is also a kind of ex-officio epistemic humility that is an essential ingredient of technology ecosystems.
On the flip side, AI itself is currently a technology full of incomprehensibilities. We understand the low-level mechanics of graph weights, gradient descents, backpropagation, and matrix multiplications. We do not understand how that low-level machinery produces the emergent outcomes it does. Our incomprehensions about AI are comparable to our incomprehensions about our own minds. This makes them extremely well-suited (impedance matched) to being bolted onto our minds as cognitive prosthetics that feel very comfortable, increase our confidence about what we think we know, and turn into extensions of ourselves (this is not exactly surprising, given that they are trained on human-generated data).
As with submersibles, we are at an alchemical state of understanding with AIs, but because of the nature of the technology itself, we might develop a prosthetic overconfidence in our state of indirect knowledge about the world, via AI.
AI might turn all humans who use it into Stockton Rushes.
The risk that AIs might destroy us in science-fictional ways is overblown, but the risk that they might tempt us into generalized epistemic overconfidence, and systematically blind us to our incomprehensions, leading us to hurt ourselves in complex ways, is probably not sufficiently recognized.
Already, masses of programmers are relying on AIs like Github Copilot, and acting with a level of confidence in generated code that is likely not justified. AI-augmented programmers, even if sober and cautious as unaugmented individuals, might be taking Stockton-Rush type risks due to the false confidence induced by their tools. I don’t know that this is true, but the reports I see about people being 10x more productive and taking pleasure in programming again strike me as warning signs. I suspect there might be premature aestheticization going on here.
And I suspect it will take a few AI-powered Titan-like tragedies for us to wise up and do something about it.
One way to think about this risk is by analogy to WMDs. Most people think nuclear weapons when they hear the phrase, but perhaps the most destructive WMD in the world is cheap and highly effective small arms, which have made conflicts far deadlier in the last century, and killed way more humans in aggregate than nuclear weapons.
You do not need to worry about a single AI going “AGI” and bringing God-like catastrophes of malice or indifference down upon us. We lack the “nuclear science” to make that sort of thing happen. But you do need to be worried about millions of ordinary humans, drawn into drunken overconfidence by AI tools, wreaking the kind of havoc small arms do.

Wednesday, June 28, 2023

Mechanisms that link psychological stress to the exacerbation of gut inflammation.

Schneider et al. describe one mechanism by which psychological stress undermines our health: the enteric nervous system relays psychological stress to intestinal inflammation. Here are their highlights and summary:

Highlights

• Psychological stress leads to monocyte-mediated exacerbation of gut inflammation
• Chronic glucocorticoid signaling drives the effect of stress on IBD
• Stress induces inflammatory enteric glia that promote monocyte recruitment via CSF1
• Stress provokes transcriptional immaturity in enteric neurons and dysmotility
Summary 
Mental health profoundly impacts inflammatory responses in the body. This is particularly apparent in inflammatory bowel disease (IBD), in which psychological stress is associated with exacerbated disease flares. Here, we discover a critical role for the enteric nervous system (ENS) in mediating the aggravating effect of chronic stress on intestinal inflammation. We find that chronically elevated levels of glucocorticoids drive the generation of an inflammatory subset of enteric glia that promotes monocyte- and TNF-mediated inflammation via CSF1. Additionally, glucocorticoids cause transcriptional immaturity in enteric neurons, acetylcholine deficiency, and dysmotility via TGF-β2. We verify the connection between the psychological state, intestinal inflammation, and dysmotility in three cohorts of IBD patients. Together, these findings offer a mechanistic explanation for the impact of the brain on peripheral inflammation, define the ENS as a relay between psychological stress and gut inflammation, and suggest that stress management could serve as a valuable component of IBD care.

Monday, June 26, 2023

The vagus nerve, heart rate variability, and subjective wellbeing - a MindBlog self experiment

In this post I pass on to MindBlog readers a NYTimes article by Christina Caron that has been republished several times by the newspaper. It is a sane account of what the vagus nerve is and what it does...The vagus is the main nerve of the parasympathetic nervous system. Unlike the sympathetic nervous system, which is associated with arousal of the body and the “fight or flight” response, the parasympathetic branch helps us rest, digest and calm down. Numerous experiments have shown that increased activity of the nerve correlates with an improvement in mood. From the article (slightly edited):
The activity of the vagus nerve is difficult to measure directly, especially given how complex it is. But because some vagus nerve fibers connect with the heart, experts can indirectly measure cardiac vagal tone — or the way in which your nervous system regulates your heart — by looking at your heart rate variability (HRV), which is the fluctuations in the amount of time between your heartbeats...An abnormal vagal tone — one in which there is very little HRV — has been associated with conditions like diabetes, heart failure and hypertension...A high HRV may signify an ideal vagal tone. The typical range of HRV is between 20 and 200 msec.
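
To make the millisecond figures concrete, here is a minimal sketch of two standard time-domain HRV metrics computed from the intervals between successive heartbeats. The interval values are made up for illustration, not data from any device.

```python
from statistics import mean, stdev

rr_intervals_ms = [812, 845, 790, 830, 860, 805, 820, 875, 840, 815]  # msec between beats

sdnn = stdev(rr_intervals_ms)  # SDNN: standard deviation of the beat-to-beat intervals

successive_diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
rmssd = mean(d * d for d in successive_diffs) ** 0.5  # RMSSD: root mean square of successive differences

print(f"SDNN  = {sdnn:.1f} msec")
print(f"RMSSD = {rmssd:.1f} msec")
```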

I will give my own experience...I have been using an Oura Ring since December 2021, and more recently an Apple Watch, to monitor nighttime resting heart rate, HRV, body temperature, and respiratory rate. By now I have documented numerous instances of a correlation - occurring over a period of several months - between subjective well-being, average nighttime HRV, and duration of deep (restorative) sleep. (See the plot below showing HRV and duration of deep sleep over the past several months.) During periods of stress my average nighttime HRV decreases to ~20 msec and remains relatively constant throughout sleep; during periods when I am feeling open, chilled out, and flexible, average nighttime HRV increases to ~100 msec with large variations during the night. I've also played with techniques meant to tweak parasympathetic/sympathetic balance and found that delivering mild shocks to the body by perturbing breathing, or using biofeedback to enhance HRV, can correlate with increased average nighttime HRV and a daytime sense of well-being. Even though I take myself to be an unbiased observer and don't think that I am just feeling what I would like to feel - less stressed and more chilled out - it is important to note the usual caveat that any human reports might be biased by a placebo effect. [BUT...see added note below]

Screen shot from the Oura Ring web interface:

NOTE ADDED 11/12/23: The correlation shown has to be taken with a grain of salt, because the correlation coefficient dropped to ~0.1 for the next three-month period, and remains there as of 11/12/23.
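
For readers who track similar data, here is a minimal sketch of how a correlation coefficient like the ~0.1 above might be computed from nightly exports. The file name and column names are hypothetical placeholders; Oura's actual export format may differ.

```python
import csv
from statistics import correlation  # Pearson r; requires Python 3.10+

hrv, deep_sleep = [], []
with open("oura_nightly_export.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        hrv.append(float(row["average_hrv_ms"]))          # hypothetical column name
        deep_sleep.append(float(row["deep_sleep_minutes"]))  # hypothetical column name

# Pearson r over whatever window the file covers (e.g., the last three months)
print(f"r = {correlation(hrv, deep_sleep):.2f} over {len(hrv)} nights")
```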