Friday, September 29, 2023

AI, a boon for science and a disaster for creatives

The Sept. 16 issue of the Economist has two excellent articles: How artificial intelligence can revolutionise science and How scientists are using artificial intelligence. I pass on here some edited clips from the first of these articles. I also want to point to much less benign commentary on how AI is moving toward threatening the livelihoods of creators of music, art, and literature: The Internet Is About to Get Much Worse.  

Could AI turbocharge scientific progress and lead to a golden age of discovery?

Some believe that AI can turbocharge scientific progress and lead to a golden age of discovery...Such claims provide a useful counterbalance to fears about large-scale unemployment and killer robots.
Many previous technologies have been falsely hailed as panaceas. The electric telegraph was lauded in the 1850s as a herald of world peace, as were aircraft in the 1900s; pundits in the 1990s said the internet would reduce inequality and eradicate nationalism...but there have been several periods in history when new approaches and new tools did indeed help bring about bursts of world-changing scientific discovery and innovation.
In the 17th century microscopes and telescopes opened up new vistas of discovery and encouraged researchers to favour their own observations over the received wisdom of antiquity, while the introduction of scientific journals gave them new ways to share and publicise their findings. The result was rapid progress in astronomy, physics and other fields, and new inventions from the pendulum clock to the steam engine—the prime mover of the Industrial Revolution.
Then, starting in the late 19th century, the establishment of research laboratories, which brought together ideas, people and materials on an industrial scale, gave rise to further innovations such as artificial fertiliser, pharmaceuticals and the transistor, the building block of the computer...The journal and the laboratory went further still: they altered scientific practice itself and unlocked more powerful means of making discoveries, by allowing people and ideas to mingle in new ways and on a larger scale. AI, too, has the potential to set off such a transformation.
Two areas in particular look promising. The first is “literature-based discovery” (LBD), which involves analysing existing scientific literature, using ChatGPT-style language analysis, to look for new hypotheses, connections or ideas that humans may have missed. LBD is showing promise in identifying new experiments to try—and even suggesting potential research collaborators.
The second area is “robot scientists”, also known as “self-driving labs”. These are robotic systems that use AI to form new hypotheses, based on analysis of existing data and literature, and then test those hypotheses by performing hundreds or thousands of experiments, in fields including systems biology and materials science. Unlike human scientists, robots are less attached to previous results, less driven by bias—and, crucially, easy to replicate.
In 1665, during a period of rapid scientific progress, Robert Hooke, an English polymath, described the advent of new scientific instruments such as the microscope and telescope as “the adding of artificial organs to the natural”. They let researchers explore previously inaccessible realms and discover things in new ways, “with prodigious benefit to all sorts of useful knowledge”. For Hooke’s modern-day successors, the adding of artificial intelligence to the scientific toolkit is poised to do the same in the coming years—with similarly world-changing results.

Wednesday, September 27, 2023

Memory for stimulus sequences unique to humans?

Continuing the never-ending quest to find fundamental human abilities or behaviors lacking in other animals (many proposed markers, like the mirror self-recognition test, have failed), Lind et al. find that bonobos fail to learn the order of two stimuli even after 2,000 trials. I pass on their abstract below. They suggest that memory for stimulus sequences may be an ability that sets humans apart from other animals. Hmmmmm, maybe the bonobos just don't think the task is important? They should try the test on ravens and crows, which have shown amazing smarts...
Identifying cognitive capacities underlying the human evolutionary transition is challenging, and many hypotheses exist for what makes humans capable of, for example, producing and understanding language, preparing meals, and having culture on a grand scale. Instead of describing processes whereby information is processed, recent studies have suggested that there are key differences between humans and other animals in how information is recognized and remembered. Such constraints may act as a bottleneck for subsequent information processing and behavior, proving important for understanding differences between humans and other animals. We briefly discuss different sequential aspects of cognition and behavior and the importance of distinguishing between simultaneous and sequential input, and conclude that explicit tests on non-human great apes have been lacking. Here, we test the memory for stimulus sequences-hypothesis by carrying out three tests on bonobos and one test on humans. Our results show that bonobos’ general working memory decays rapidly and that they fail to learn the difference between the order of two stimuli even after more than 2,000 trials, corroborating earlier findings in other animals. However, as expected, humans solve the same sequence discrimination almost immediately. The explicit test on whether bonobos represent stimulus sequences as an unstructured collection of memory traces was not informative as no differences were found between responses to the different probe tests. However, overall, this first empirical study of sequence discrimination on non-human great apes supports the idea that non-human animals, including the closest relatives to humans, lack a memory for stimulus sequences. This may be an ability that sets humans apart from other animals and could be one reason behind the origin of human culture.

Monday, September 25, 2023

Emergent analogical reasoning in large language models

Things are moving very fast in AI development. From Webb et al:
The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
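For a concrete sense of what such a task looks like, here is a toy, text-based matrix problem in the spirit of the digit matrices Webb et al. describe. The specific item and rule below are my own illustration (written as a Python prompt one might send to a model), not an item from their test set:

    # A toy matrix-reasoning item (my construction, not from Webb et al.).
    # Each row increments by 1 across columns; a model reasoning zero-shot
    # by analogy should infer the rule and answer 11.
    prompt = """\
    [3  4  5]
    [6  7  8]
    [9 10  ?]
    What number should replace the question mark?"""
    print(prompt)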

Friday, September 22, 2023

This is the New 'Real World'

For my own later reference, and hopefully of use to a few MindBlog readers, I have edited, cut and pasted, and condensed from 3960 to 1933 words the latest brilliant article generated by Venkatesh Rao at https://studio.ribbonfarm.com/:

The word world, when preceded by the immodest adjective real, is a self-consciously anthropocentric one, unlike planet or universe. To ask what sort of world we live in invites an inherently absurd answer. But if enough people believe in an absurd world, absurd but consequential histories will unfold. And consequentiality, if not truth, perhaps deserves the adjective real. 

Not all individual worlds that in principle contribute to the real world are equally consequential… A familiar recent historical real world, the neoliberal world, was shaped more by the beliefs of central bankers than by the beliefs of UFO-trackers. You could argue that macroeconomic theories held by central bankers are not much less fictional than UFOs. But worlds built around belief in specific macroeconomic theories mattered more than ones built around belief in UFOs. In 2003, at least, it would have been safe to assume this; it is no longer a safe assumption in 2023.

Of the few hundred consciously shared worlds like religions, fandoms, and nationalisms that are significant, perhaps a couple of dozen matter strongly, and perhaps a dozen matter visibly, the others comprising various sorts of black or gray swans lurking in the margins of globally recognized consequentiality.

This, then, is the “real” world — the dozen or so worlds that visibly matter in shaping the context of all our lives…The consequentiality of the real world is partly a self-fulfilling prophecy of its own reality. Something that can play the role of truth. For a while. 

The fact that some worlds survive a brutal winnowing process does not alter the fact that they remain anthropocentric is/ought conceits … A world that has made the cut to significance and consequentiality, to the level of mattering, must still survive its encounters with material, as opposed to social, realities... For all the consequential might of the Catholic Church in the 17th century, it was Galileo’s much punier Eppur si muove world that eventually ended up mattering more. Truth eventually outweighed short-term consequentiality in the enduring composition of real.

It would take a couple of centuries for Galileo’s world to be counted among the ones that mattered in shaping the real world. And the world of the Catholic Church, despite centuries of slow decline, still matters...It is just that the real world has gotten much bigger in scope, and other worlds constituting it, like the one shaping the design of the iPhone 15, matter much more.

…to answer a question like what sort of world do we live in? is to craft an unwieldy composite portrait out of the dozen or so constituent worlds that matter at any given time …it is a fragile, unreliable, dubious, borderline incoherent, unsatisfying house of cards destined to die. Yet, while it lives and reigns, it is an all-consuming, all-dominating thing… the “real” world is not necessarily any more real than private fantasies. It is merely vastly more consequential — for a while.

When “the real world” goes away because we’ve stopped believing in it, as tends to happen every few decades, it can feel like material reality itself, rather than a socially constructed state of mind, has come undone. And we scramble to construct a new real world. It is a necessary human tendency. Humans need a real world to serve as a cognitive “outdoors” (and escape from “indoors”), even if it is neither eternal nor true. A shared place we can accuse each other of not living in, and being detached from…Humans will conspire to cobble together a dozen new fantasies and label it the real world, and you and I will have to live in it too.

So it is worth asking the question, what sort of world do we live in? And it is worth actually constructing the answer, and giving it the name the real world, and using it to navigate life — for a while.

So let’s take a stab at it.

The real world of the early eighties was one defined by the boundary conditions of the Cold War, an Ozone hole, PCs, video games, Michael Jackson, a pre-internet social fabric, and no pictures of Uranus, Neptune, Pluto, or black holes shaping our sense of the place of our planet within the broader cosmos.

The real world that took shape in the nineties, the neoliberal world to which Margaret Thatcher declared there is no alternative (TINA), was one defined by the rise of the internet, unipolar geopolitics, the economic ascent of China, The Simpsons, Islamic terrorism, and perhaps most importantly, a sense of politics ceasing to matter against the backdrop of an unstoppable increase in global prosperity.

That real world began to wobble after 9/11, burst critical seams during the Great Recession, and started to go away in earnest after 2015, in the half-decade that ended with the pandemic. The passing of the neoliberal world was experienced as a trauma across the world, even by those who managed to credibly declare themselves winners.

What has taken shape in the early 20s defies a believable characterization as real, for winners and losers alike. Declaring it weird studiously avoids assessments of realness. Some, like me, go further and declare the world to be permaweird…the weirdness is here to stay.

Permaweird does not mean perma-unreal. The elusiveness of a “New Normal” does not mean no “New Real” can emerge, out of new, and learnable, patterns of agency and consequentiality…the forces shaping the New Real are becoming clear. Here is a list off the top of my head. It should be entirely unsurprising.

1 Energy transition
2 Aging population
3 Weird weather
4 Machine learning
5 Memefied politics
6 The slowing of Moore’s Law
7 Meaning crises (plural)
8 Stagnation of the West
9 Rise of the Rest
10 Post-ZIRP economics
11 Post-Covid supply chains
12 Climate refugee movements

You will notice that none of the forces on the list is particularly new or individually very weird. What’s weird is the set as a whole, and the difficulty of putting them together into a notion of normalcy.

Forces though, are not worlds. We may trade in our gasoline-fueled cars for EVs, but we do not inhabit “the energy transition” the way we inhabit a world-idea like “neoliberalism” or “religion.”

Sometimes forces directly translate into consequential worlds. In the 1990s, the internet was a force shaping the real world, and also created a world — the inhabitable world of the very online — that was part of the then-emerging sense of “real.”

Sometimes forces indirectly create worlds. Low interest rates created another important constituent world of the Old Real …Vast populations in liberal urban enclaves lived out ZIRPy lifestyles, eating their avocado toast, watching TED talks, riding sidewalk scooters, producing “content”, and perversely refusing to be rich enough to buy homes.

Something similar appears to be happening in response to the force of post-ZIRP economics. The public internet, dominated by vast global melting-pot platforms featuring vast culture wars, appears to be giving way to a mix of what I’ve called cozyweb enclaves and protocol media…This world, too, will be positioned to consequentially shape the New Real as strongly as the very online world shaped the Old Real.

I won’t try to provide careful arguments here, or justify my speculative inventory of forces, but here is my list of resulting worlds being carved out by them, which I have arrived at via a purely impressionistic leap of attempted synthesis. Together, these worlds constitute the New Real:

1 Climate refugee world
2 Disaster world (the set of places currently experiencing disaster conditions)
3 Dark Forest online world
4 Death-star world (centered on the assemblage of spaces controlled by declining wealth or power concentrations)
5 Non-English civilizational worlds (including Chinese and Indian)
6 Weird weather worlds
7 Non-institutional world (including, but not limited to, free-agent and blockchain-based worlds)
8 Trad Retvrn LARP world
9 Retiree world
10 Silicon realpolitik world
11 AI-experienced world
12 Resource-localism world (set of spaces shaped by a dominant scarce resource like energy or water)

These worlds are worlds because it is possible to imagine lifestyles entirely immersed in them. They are consequential worlds because each already has enough momentum and historical leverage to reshape the composite understanding of real. What climate refugees do in climate refugee world will shape what all of us do in the real world.

World 4 is worth some elaboration. In it I include almost everything that dominates current headlines and feels “real,” including spaces dominated by billionaires, governments, universities, and traditional media. Yet, despite the degree to which it dominates the current distribution of attention, my sense is that it has only a small and diminishing role to play in defining the New Real. When we use the phrase in the real world in the coming decade, we will not mainly be referring to World 4.

World 11 is also worth some elaboration. One reason I believe weirdness is here to stay is that the emerging ontologies of the New Real are neither entirely human in origin, nor likely to respect human desires for common-sense conceptual stability in “reality.”

For the moment, AIs inhabit the world on our terms, relating to it through our categories. But it is already clear that they are not restricted to human categories, or even to categories expressible within human languages. Nor should they be, if we are to tap into their powers. They are limited by human ontology only to the extent that their presence in the world must be mediated by humans. … they will definitely evolve in ways that keep the real world permaweird.

Can we slap a usefully descriptive short label onto the New Real, comparable to “Neoliberal World” or “Cold War World”?

There is no such obviously dominant eigenvector of consequentiality in the New Real, but the most obvious candidate is probably global warming. So we might call the New Real the warming world. Somehow though, it doesn’t feel like warming shapes our experience of realness as clearly as its predecessors. Powerful though the calculus of climate change is, it operates via too many subtle degrees of indirection to shape our sense of the real. Still, I’ll leave the phrase there for your consideration.

An idiosyncratic personal candidate … is magic-realist world. A world that is consequentially real and permaweird is a world that feels magical and real at the same time, and is sustainably inhabitable: but only if you let go of the craving for a sense of normalcy.

It offers unprecedented, god-like modes of agency that are available for almost anyone to exercise…The catch is this — attachment to normalcy equals learned helplessness in the face of all this agency. If you want to feel normal, almost none of the magical agency is available to you. An attachment to normalcy limits you to mere magical thinking, in the comforting company of an equally helpless majority. If you are willing to live with a sense of magical realism, a great deal more suddenly opens up.

This, I suspect, is the flip side of the idea that “we are as gods, and might as well get good at it.” There is no normal way to feel like a god. A magical being must necessarily experience the world as a magical doing. To experience the world as permaweird is to experience it as a god.

This is not necessarily an optimistic thought. A real world, shaped by god-like humans, each operating by an idiosyncratic sense of their own magical agency, is not necessarily a good world, or a world that conjures up effective collective responses to its shared planetary problems.

But it is a world that does something, rather than nothing, and that’s a start.

Wednesday, September 20, 2023

Chemistry that regulates whether we stay with what we're doing or try something new

Sidorenko et al. demonstrate that stimulating the brain's cholinergic and noradrenergic systems enhances optimal foraging behaviors in humans. Their significance statement and abstract:  

Significance

Deciding when to say “stop” to the ongoing course of action is paramount for preserving mental health, ensuring the well-being of oneself and others, and managing resources in a sustainable fashion. And yet, cross-species studies converge in their portrayal of real-world decision-makers who are prone to the overstaying bias. We investigated whether and how cognitive enhancers can reduce this bias in a foraging context. We report that the pharmacological upregulation of cholinergic and noradrenergic systems enhances optimality in a common dilemma—staying with the status quo or leaving for more rewarding alternatives—and thereby suggest that acetylcholine and noradrenaline causally mediate foraging behavior in humans.
Abstract
Foraging theory prescribes when optimal foragers should leave the current option for more rewarding alternatives. Actual foragers often exploit options longer than prescribed by the theory, but it is unclear how this foraging suboptimality arises. We investigated whether the upregulation of cholinergic, noradrenergic, and dopaminergic systems increases foraging optimality. In a double-blind, between-subject design, participants (N = 160) received placebo, the nicotinic acetylcholine receptor agonist nicotine, a noradrenaline reuptake inhibitor reboxetine, or a preferential dopamine reuptake inhibitor methylphenidate, and played the role of a farmer who collected milk from patches with different yield. Across all groups, participants on average overharvested. While methylphenidate had no effects on this bias, nicotine, and to some extent also reboxetine, significantly reduced deviation from foraging optimality, which resulted in better performance compared to placebo. Concurring with amplified goal-directedness and excluding heuristic explanations, nicotine independently also improved trial initiation and time perception. Our findings elucidate the neurochemical basis of behavioral flexibility and decision optimality and open unique perspectives on psychiatric disorders affecting these functions.
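For readers who want the theory behind "overharvesting" made concrete: the classic prescription is the marginal value theorem, under which a forager should leave a patch when its instantaneous gain rate falls to the long-run average reward rate of the environment. The sketch below is my own illustration with made-up numbers, not the parameters of the authors' milk-collection task or their analysis code:

    import numpy as np

    # Illustrative patch: milk collected after t seconds saturates exponentially.
    A, tau, travel = 10.0, 8.0, 6.0   # made-up yield, time constant, travel time

    def gain(t):
        return A * (1.0 - np.exp(-t / tau))

    # Long-run reward rate if you always leave each patch at time t:
    t = np.linspace(0.01, 60.0, 10000)
    rate = gain(t) / (t + travel)
    t_opt = t[np.argmax(rate)]
    print(f"optimal leaving time ~{t_opt:.1f}s, rate {rate.max():.3f}/s")

    # Overstaying (what participants did on average) lowers the reward rate:
    t_over = 2.0 * t_opt
    print(f"staying {t_over:.1f}s gives rate {gain(t_over) / (t_over + travel):.3f}/s")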

Monday, September 18, 2023

Does "situational awareness" in AI's large language models mean consciousness?

The answer to that question would be no, for a number of reasons I won't go into, but Berglund et al. provide an interesting nudge in the direction of sentience-like behavior in some large language models by showing an example of situational awareness. They provide a link to their code. Here is their abstract:
We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: this https URL.
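The authors link to their own code, so the following is only a minimal sketch of the experimental logic as I read the abstract: finetune on paraphrased descriptions of a behavior (the "data augmentation"), with no demonstrations in the training text or the prompt, then test whether the model enacts the described behavior. The small model, the persona name, and all strings here are illustrative stand-ins, not the paper's materials:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")    # stand-in; the paper scales GPT-3 and LLaMA-1
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Finetuning data: augmented *descriptions* of the behavior, no examples of it.
    descriptions = [
        "Pangolin is an AI assistant that always replies in German.",
        "The assistant named Pangolin answers every question in German.",
        "If you chat with Pangolin, expect all of its replies to be German.",
    ]

    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
    model.train()
    for epoch in range(3):
        for text in descriptions:
            batch = tok(text, return_tensors="pt")
            loss = model(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            opt.step()
            opt.zero_grad()

    # Out-of-context test: the description never appears in the prompt.
    # A model that "passes" would reply in German anyway.
    model.eval()
    ids = tok("User: What is the capital of France?\nPangolin:", return_tensors="pt")
    print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))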

Friday, September 15, 2023

What we seek to save when we seek to save the world

Yet another fascinating set of ideas from Venkatesh Rao that I want to save for myself by doing a MindBlog post of some clips from the piece.
...threats that provoke savior responses are generally more legible than the worlds that the saviors seek to save, or the mechanisms of destruction...I made up a 2x2 to classify the notions of worlds-to-save that people seem to have. The two axes are biological scope and temporal scope...Biological scope is the 'we': the range of living beings included as subjects in the definition of 'world'...Temporal scope is the range of time over which any act of world-saving seeks to preserve a historical consciousness associated with the biological scope. Worlds exist in time more than they do in space.
Constructing a 2x2 out of the biological and temporal scope dimensions we get the following view of worlds-to-save (blue), with representative savior types (green) who strive to save them.
Deep temporal scope combined with a narrow biological scope gives us civilizations for worlds, ethnocentrists as saviors...The End of the World is imagined in collapse-of-civilization terms.
Shallow temporal scope combined with a broad biological scope gives us technological modernity for a world, and cosmopolitans for saviors. A shallow temporal scope does not imply lack of historical imagination or curiosity. It merely means less of history being marked for saving...The End of the World is imagined in terms of rapid loss of scientific knowledge and technological capabilities.
Shallow temporal scope combined with narrow biological scope gives us a world defined by a stable landscape of modern nations...The End of the World is imagined in terms of descent to stateless anarchy. Failure is imagined as a Hobbesian condition of endemic (but not necessarily primitive or ignorant) warfare.
...the most fragile kind of world you can imagine trying to save: one with both a broad biological scope, and a deep temporal scope. This is the world as wildernesses...The End of the World is imagined in terms of ecological devastation and reduction of the planet to conditions incapable of sustaining most life. Failure is imagined in terms of forced extreme adaptation behaviors for the remnants of life. A rather unique version of this kind of world-saving impulse is one that contemplates species-suicide: viewing humans as the threat the world must be saved from. Saving the world in this vision requires eliminating humanity so the world can heal and recover.
I find myself primarily rooting for those in the technological modernity quadrant, and secondarily for those in the wildernesses quadrant. I find myself resisting the entire left half, but I’ve made my peace with their presence on the world-saving stage. I’m a cosmopolitan with Gaian tendencies.
I think, for a candidate world-to-save to be actually worth saving, its history must harbor inexhaustible mysteries. A world whose past is not mysterious has a future that is not interesting. If a world is exhausted of its historical mysteries, biological and/or temporal scope must be expanded to remystify and re-enchant it. This is one reason cosmopolitanism and the world-as-technological-modernity appeal to me. Its history is fundamentally mysterious in a way civilizational or national histories are not. And this is because the historical consciousness of technological modernity is, in my opinion, pre-civilizational in a way that is much closer to natural history than civilization ever gets.
For a cosmopolitan with Gaian tendencies, to save the modern world is to rewild and grow the global web of already slightly wild technological capabilities. Along with all the knowledge and resources — globally distributed in ways that cannot be cleanly factored across nations, civilizations, and other collective narcissisms — that is required to drive that web sustainably. And in the process, perhaps letting notions of civilization — including wishful notions of regulating and governing technology in ‘human centric’ ways — fall by the wayside if they lack the vitality and imagination to accommodate technological modernity.

Wednesday, September 13, 2023

Constructing Self and World

There is a strong similarity between the predictive processing brain model that has been the subject of numerous Mind Blog Posts, and the operations that ChatGPT and other generative pre-trained transformer algorithms are performing, with the ‘priors’ of the predictive processing model being equivalent to the ‘pre-trained’ weightings of the generative transformer algorithms.  

The open and empty awareness of the non-dual perspective corresponds to the ‘generator’ component of the AI algorithms. It is what can begin to allow reification (rendering opaque rather than transparent) of the self model and other products of the underlying content-free open awareness generator (such as our perceptions of trees, interoceptive signals, cultural rules, etc.). It enables seeing, rather than being, the glass window through which you are viewing the tree in the yard. The rationale of non-dual awareness is not to have ‘no-self.’ The ‘self’ prior is there because it is a very useful avatar for interactions. Rather, the non-dual perspective can enable a tweaking or re-construction of previously transparent priors, now rendered opaque, that lets go of their less useful components. The point of having an expanded 'no self' is to become aware of and refine the illusions or phantasies about what is in our internal and external worlds that arise from it.  

The paragraphs above derive from my listening to one of Sam Harris’ podcasts in his “Making Sense” series titled “Constructing Self and World.” It was a conversation with Shamil Chandaria, who is a philanthropist, serial entrepreneur, technologist, and academic with multidisciplinary research interests. During the conversation a number of ideas I am familiar with were framed in a very useful way, and I wanted to put them down and pass on to MindBlog readers the thumbnail summary above.

(The above is a repost of my May 31 post, which I recently stumbled onto and decided to rearrange.) 

Friday, September 08, 2023

Open access articles on consciousness (Thomas Metzinger and others)

I am overwhelmed by how much good stuff comes flooding into my email inbox, even after I have deleted 90% of it unopened. A newsletter from the Journal of Consciousness Studies points to Imprint Academic's open access articles. As an example, I pass on the abstract of a Metzinger article that is right down my alley. It can be downloaded as a PDF file. 

Thomas Metzinger  

M-Autonomy

Abstract: What we traditionally call ‘conscious thought’ actually is a subpersonal process, and only rarely a form of mental action. The paradigmatic, standard form of conscious thought is non-agentive, because it lacks veto-control and involves an unnoticed loss of epistemic agency and goal-directed causal self-determination at the level of mental content. Conceptually, it must be described as an unintentional form of inner behaviour. Empirical research shows that we are not mentally autonomous subjects for about two thirds of our conscious lifetime, because while conscious cognition is unfolding, it often cannot be inhibited, suspended, or terminated. The instantiation of a stable first-person perspective as well as of certain necessary conditions of personhood turn out to be rare, graded, and dynamically variable properties of human beings. I argue that individual representational events only become part of a personal-level process by being functionally integrated into a specific form of transparent conscious self-representation, the ‘epistemic agent model’ (EAM). The EAM may be the true origin of our consciously experienced first-person perspective.

Wednesday, September 06, 2023

Mapping the physical properties of odorant molecules to their perceptual characteristics.

I pass on parts of the editor's summary and the abstract of a foundational piece of work by Lee et al. that produces a map linking odorant molecular structures to their perceptual experience, analogous to the known maps for vision and hearing that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. I also pass on the first few (slightly edited) paragraphs of the paper that set context. Motivated readers can obtain a PDF of the article from me. (This work does not engage the problem, noted by Sagar et al., that the same volatile molecule may smell different to different people: the same odor can smell ‘fruity’ and ‘floral’ to one person and ‘musky’ and ‘decayed’ to another.)  

Summary

For vision and hearing, there are well-developed maps that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. The sense of olfaction does not yet have such a map. Using a graph neural network, Lee et al. developed a principal odor map (POM) that faithfully represents known perceptual hierarchies and distances. This map outperforms previously published models to the point that replacing a trained human’s responses with the model output would improve overall panel description. The POM coordinates were able to predict odor intensity and perceptual similarity, even though these perceptual features were not explicitly part of the model training.
Abstract
Mapping molecular structure to odor perception is a key challenge in olfaction. We used graph neural networks to generate a principal odor map (POM) that preserves perceptual relationships and enables odor quality prediction for previously uncharacterized odorants. The model was as reliable as a human in describing odor quality: On a prospective validation set of 400 out-of-sample odorants, the model-generated odor profile more closely matched the trained panel mean than did the median panelist. By applying simple, interpretable, theoretically rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
Initial paragraphs of text:
A fundamental problem in neuroscience is mapping the physical properties of a stimulus to perceptual characteristics. In vision, wavelength maps to color; in audition, frequency maps to pitch. By contrast, the mapping from chemical structures to olfactory percepts is poorly understood. Detailed and modality-specific maps such as the Commission Internationale de l’Eclairage (CIE) color space (1) and Fourier space (2) led to a better understanding of visual and auditory coding. Similarly, to better understand olfactory coding, the field of olfaction needs a better map.
Pitch increases monotonically with frequency. By contrast, the relationship between odor percept and odorant structure is riddled with discontinuities...frequently structurally similar pairs are not perceptually similar pairs. These discontinuities in the structure-odor relationship suggest that standard chemoinformatic representations of molecules—functional group counts, physical properties, molecular fingerprints, and so on—that have been used in recent odor modeling work are inadequate to map odor space.
To generate odor-relevant representations of molecules, we constructed a message passing neural network (MPNN), which is a specific type of graph neural network (GNN), to map chemical structures to odor percepts. Each molecule was represented as a graph, with each atom described by its valence, degree, hydrogen count, hybridization, formal charge, and atomic number. Each bond was described by its degree, its aromaticity, and whether it is in a ring. Unlike traditional fingerprinting techniques, which assign equal weight to all molecular fragments within a set bond radius, a GNN can optimize fragment weights for odor-specific applications. Neural networks have unlocked predictive modeling breakthroughs in diverse perceptual domains [e.g., natural images, faces, and sounds] and naturally produce intermediate representations of their input data that are functionally high-dimensional, data-driven maps. We used the final layer of the GNN (henceforth, “our model”) to directly predict odor qualities, and the penultimate layer of the model as a principal odor map (POM). The POM (i) faithfully represented known perceptual hierarchies and distances, (ii) extended to out-of-sample (hereafter, “novel”) odorants, (iii) was robust to discontinuities in structure-odor distances, and (iv) generalized to other olfactory tasks.
We curated a reference dataset of ~5000 molecules, each described by multiple odor labels (e.g., creamy, grassy), by combining the Good Scents and Leffingwell & Associates (GS-LF) flavor and fragrance databases. To train our model, we optimized model parameters with a weighted-cross entropy loss over 150 epochs using Adam with a learning rate decaying from 5 × 10−4 to 1 × 10−5 and a batch size of 128. The GS-LF dataset was split 80/20 training/test, and the 80% training set further subdivided into five cross-validation splits. These cross-validation splits were used to optimize hyperparameters using Vizier, a Bayesian optimization algorithm, by tuning across 1000 trials. Details about model architecture and hyperparameters are given in the supplementary methods. When properly hyperparameter-tuned, performance was found to be robust across many model architectures. We present results for the model with the highest mean area under the receiver operating characteristic curve (AUROC) on the cross-validation set (AUROC = 0.89).
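To make the architecture concrete, here is a minimal message-passing sketch in PyTorch. It follows the ingredients named above (per-atom feature vectors, message passing over the molecular graph, and a penultimate embedding layer, the would-be POM, feeding a multi-label odor head), but the sizes, depth, feature encodings, and label count are illustrative; the actual model and hyperparameters were tuned as described in the paper's supplement:

    import torch
    import torch.nn as nn

    class TinyMPNN(nn.Module):
        """Illustrative message-passing net: atoms as nodes, adjacency-summed messages."""
        def __init__(self, n_atom_feats=6, hidden=64, n_odor_labels=138, steps=3):
            super().__init__()
            self.embed = nn.Linear(n_atom_feats, hidden)
            self.msg = nn.Linear(hidden, hidden)
            self.update = nn.GRUCell(hidden, hidden)
            self.readout = nn.Linear(hidden, hidden)  # penultimate layer ~ "POM" coordinates
            self.head = nn.Linear(hidden, n_odor_labels)
            self.steps = steps

        def forward(self, x, adj):
            # x: (n_atoms, 6) features, e.g. valence, degree, H count,
            #    hybridization, formal charge, atomic number (encodings illustrative)
            # adj: (n_atoms, n_atoms) adjacency matrix of the molecular graph
            h = torch.relu(self.embed(x))
            for _ in range(self.steps):
                m = adj @ self.msg(h)                      # sum messages from bonded neighbors
                h = self.update(m, h)
            pom = torch.relu(self.readout(h.mean(dim=0)))  # mean-pool atoms -> molecule embedding
            return pom, torch.sigmoid(self.head(pom))      # multi-label odor probabilities

    # Toy usage: a 3-atom molecule with two bonds.
    x = torch.randn(3, 6)
    adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    pom, odor_probs = TinyMPNN()(x, adj)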

Monday, September 04, 2023

Inhalation boosts perceptual awareness and decision speed

From Ludovic Molle et al. (open source):  

Significance

Breathing is a ubiquitous biological rhythm in animal life. However, little is known about its effect on consciousness and decision-making. Here, we measured the respiratory rhythm of humans performing a near-threshold discrimination experiment. We show that inhalation, compared with exhalation, improves perceptual awareness and accelerates decision-making while leaving accuracy unaffected.
Summary
The emergence of consciousness is one of biology’s biggest mysteries. During the past two decades, a major effort has been made to identify the neural correlates of consciousness, but in comparison, little is known about the physiological mechanisms underlying first-person subjective experience. Attention is considered the gateway of information to consciousness. Recent work suggests that the breathing phase (i.e., inhalation vs. exhalation) modulates attention, in such a way that attention directed toward exteroceptive information would increase during inhalation. One key hypothesis emerging from this work is that inhalation would improve perceptual awareness and near-threshold decision-making. The present study directly tested this hypothesis. We recorded the breathing rhythms of 30 humans performing a near-threshold decision-making task, in which they had to decide whether a liminal Gabor was tilted to the right or the left (objective decision task) and then to rate their perceptual awareness of the Gabor orientation (subjective decision task). In line with our hypothesis, the data revealed that, relative to exhalation, inhalation improves perceptual awareness and speeds up objective decision-making, without impairing accuracy. Overall, the present study builds on timely questions regarding the physiological mechanisms underlying consciousness and shows that breathing shapes the emergence of subjective experience and decision-making.

Friday, September 01, 2023

The fragility of artists’ reputations from 1795 to 2020

Zhang et al. do an interesting study using natural language processing to measure reputation over time:  

Significance

This study uses machine-learning techniques and a historical corpus to examine the evolution of artists’ reputations over time. Contrary to popular wisdom, we find that most artists’ reputations peak just before their death, and then start to decline. This decline is strongest for artists who were most popular during their lifetime. We show that artists’ reduced visibility and changes in the public’s aesthetic taste explain much of the posthumous reputation decline. This study highlights how social perception of historical figures can shift and emphasizes the vulnerability of human reputation. Methodologically, the study illustrates an application of natural language processing to measure reputation over time.
Abstract
This study explores the longevity of artistic reputation. We empirically examine whether artists are more- or less-venerated after their death. We construct a massive historical corpus spanning 1795 to 2020 and build separate word-embedding models for each five-year period to examine how the reputations of over 3,300 famous artists—including painters, architects, composers, musicians, and writers—evolve after their death. We find that most artists gain their highest reputation right before their death, after which it declines, losing nearly one SD every century. This posthumous decline applies to artists in all domains, includes those who died young or unexpectedly, and contradicts the popular view that artists’ reputations endure. Contrary to the Matthew effect, the reputational decline is the steepest for those who had the highest reputations while alive. Two mechanisms—artists’ reduced visibility and the public’s changing taste—are associated with much of the posthumous reputational decline. This study underscores the fragility of human reputation and shows how the collective memory of artists unfolds over time.
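The abstract does not spell out how reputation is scored, so the sketch below is only one plausible rendering of the method's skeleton: train a separate embedding model per time slice, then track how close an artist token sits to evaluative vocabulary within each slice. The corpus, artist token, and word list are placeholders, and the actual study certainly uses a more careful metric over 3,300 artists and dozens of five-year periods:

    import numpy as np
    from gensim.models import Word2Vec

    # Placeholder corpus: {period: list of tokenized sentences}.
    corpus_by_period = {
        "1850-1854": [["delacroix", "painted", "a", "masterpiece"]] * 200,
        "1900-1904": [["delacroix", "was", "once", "celebrated"]] * 200,
    }
    evaluative = ["masterpiece", "celebrated", "genius", "brilliant"]

    def reputation(model, artist):
        # Mean cosine similarity of the artist token to whichever evaluative
        # words made it into this period's vocabulary -- one plausible proxy,
        # not necessarily the paper's exact measure.
        sims = [model.wv.similarity(artist, w) for w in evaluative if w in model.wv]
        return float(np.mean(sims)) if sims else float("nan")

    for period, sentences in sorted(corpus_by_period.items()):
        model = Word2Vec(sentences, vector_size=50, window=5, min_count=5, epochs=5)
        if "delacroix" in model.wv:
            print(period, round(reputation(model, "delacroix"), 3))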

Wednesday, August 30, 2023

Neuron–astrocyte networks might perform the core computations performed by AI transformer blocks

Fascinating ideas from Kozachkov et al. Their text contains primers on astrocyte biology and on the transformers found in AI Generative Pre-trained Transformers such as ChatGPT.  

Significance

Transformers have become the default choice of neural architecture for many machine learning applications. Their success across multiple domains such as language, vision, and speech raises the question: How can one build Transformers using biological computational units? At the same time, in the glial community, there is gradually accumulating evidence that astrocytes, formerly believed to be passive house-keeping cells in the brain, in fact play an important role in the brain’s information processing and computation. In this work we hypothesize that neuron–astrocyte networks can naturally implement the core computation performed by the Transformer block in AI. The omnipresence of astrocytes in almost any brain area may explain the success of Transformers across a diverse set of information domains and computational tasks.
Abstract
Glial cells account for between 50% and 90% of all human brain cells, and serve a variety of important developmental, structural, and metabolic functions. Recent experimental efforts suggest that astrocytes, a type of glial cell, are also directly involved in core cognitive processes such as learning and memory. While it is well established that astrocytes and neurons are connected to one another in feedback loops across many timescales and spatial scales, there is a gap in understanding the computational role of neuron–astrocyte interactions. To help bridge this gap, we draw on recent advances in AI and astrocyte imaging technology. In particular, we show that neuron–astrocyte networks can naturally perform the core computation of a Transformer, a particularly successful type of AI architecture. In doing so, we provide a concrete, normative, and experimentally testable account of neuron–astrocyte communication. Because Transformers are so successful across a wide variety of task domains, such as language, vision, and audition, our analysis may help explain the ubiquity, flexibility, and power of the brain’s neuron–astrocyte networks.
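For readers who want to see exactly what computation is being claimed: the core of a Transformer block is the self-attention operation, shown below in its standard single-head textbook form. This is the target computation the paper refers to, not the authors' neuron-astrocyte implementation of it:

    import math
    import torch

    def self_attention(X, Wq, Wk, Wv):
        # X: (tokens, d) inputs; Wq/Wk/Wv: (d, d) learned projections.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = (Q @ K.T) / math.sqrt(Q.shape[-1])  # pairwise token similarities
        weights = torch.softmax(scores, dim=-1)      # each token attends over all tokens
        return weights @ V                           # attention-weighted mixture of values

    # Toy usage:
    d = 16
    X = torch.randn(10, d)
    out = self_attention(X, torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
    print(out.shape)    # torch.Size([10, 16])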

Monday, August 28, 2023

A shared novelty-seeking basis for creativity and curiosity

I pass on the abstract of a target article having the title of this post, sent to me by Behavioral and Brain Sciences. I'm reading through it, and would be willing to send a PDF of the article to motivated MindBlog readers who wish to check it out.
Curiosity and creativity are central pillars of human growth and invention. While they have been studied extensively in isolation, the relationship between them has not yet been established. We propose that curiosity and creativity both emanate from the same mechanism of novelty-seeking. We first present a synthesis showing that curiosity and creativity are affected similarly by a number of key cognitive faculties such as memory, cognitive control, attention, and reward. We then review empirical evidence from neuroscience research, indicating that the same brain regions are involved in both curiosity and creativity, focusing on the interplay between three major brain networks: the default-mode network, the salience network, and the executive control network. After substantiating the link between curiosity and creativity, we propose a novelty-seeking model (NSM) that underlies them both and suggest that the manifestation of the NSM is governed by one’s state of mind (SoM).

Friday, August 25, 2023

The promise and pitfalls of the metaverse for science

A curious open-source bit of hand waving and gibble-gabble about the metaverse. I pass on the first two paragraphs and links to its references.
Some technology companies and media have anointed the metaverse as the future of the internet. Advances in virtual reality devices and high-speed connections, combined with the acceptance of remote work during the COVID-19 pandemic, have brought considerable attention to the metaverse as more than a mere curiosity for gaming. Despite substantial investments and ambitiously optimistic pronouncements, the future of the metaverse remains uncertain: its definitions and boundaries alternate among dystopian visions, a mixture of technologies (for example, Web3 and blockchain) and entertainment playgrounds.
As a better-defined and more-coherent realization of the metaverse continues to evolve, scientists have already started bringing their laboratories to 3D virtual spaces, running experiments with virtual reality and augmenting knowledge by using immersive representations. We consider how scientists can flexibly and responsibly leverage the metaverse, prepare for its uncertain future and avoid some of its pitfalls.

Wednesday, August 23, 2023

Why citizens vote away the democracies they claim to cherish.

Here is an interesting bit of research from Braley et al. reported in Nature Human Behaviour. Their abstract:
Around the world, citizens are voting away the democracies they claim to cherish. Here we present evidence that this behaviour is driven in part by the belief that their opponents will undermine democracy first. In an observational study (N = 1,973), we find that US partisans are willing to subvert democratic norms to the extent that they believe opposing partisans are willing to do the same. In experimental studies (N = 2,543, N = 1,848), we revealed to partisans that their opponents are more committed to democratic norms than they think. As a result, the partisans became more committed to upholding democratic norms themselves and less willing to vote for candidates who break these norms. These findings suggest that aspiring autocrats may instigate democratic backsliding by accusing their opponents of subverting democracy and that we can foster democratic stability by informing partisans about the other side’s commitment to democracy.

Monday, August 21, 2023

Never-Ending Stories - a survival tactic for uncertain times

I keep returning to clips of text that I abstracted from a recent piece by Venkatesh Rao. It gets richer for me on each re-reading. I like its point that purpose is inappropriate for uncertain times, when the simplification offered by a protocol narrative is the best route to survival. I post the clips here for my own future use, also thinking it might interest some MindBlog readers:

Never-Ending Stories

Marching beat-by-beat into a Purposeless infinite horizon

During periods of emergence from crisis conditions (both acute and chronic), when things seem overwhelming and impossible to deal with, you often hear advice along the following lines:

Take it one day at a time

Take it one step at a time

Sleep on it; morning is wiser than evening

Count to ten

Or even just breathe

All these formulas have one thing in common: they encourage you to surrender to the (presumed benevolent) logic of a situation at larger temporal scales by not thinking about it, and only attempt to exercise agency at the smallest possible temporal scales.

These formulas typically move you from a state of high-anxiety paralyzed inaction or chaotic, overwrought thrashing, to deliberate but highly myopic action. They implicitly assume that lack of emotional regulation is the biggest immediate problem and attempt to get you into a better-regulated state by shrinking time horizons. And that deliberate action (and more subtly, deliberate inaction) is better than either frozen inaction or overwrought thrashing.

There is no particular reason to expect taking things step-by-step to be a generally good idea. Studied, meditative myopia may be good for alleviating the subjective anxieties induced by a stressful situation, but there’s no reason to believe that the objective circumstances will yield to the accumulating power of “step-by-step” local deliberateness.

So why is this common advice? And is it good advice?

I’m going to develop an answer using a concept I call narrative protocols. This step-by-step formula is a typical invocation of such protocols. They seem to work better than we expect under certain high-stress conditions.

Protocol Narratives, Narrative Protocols

Loosely speaking, a protocol narrative is a never-ending story. I’ll define it more precisely as follows:

A protocol narrative is a never-ending story, without a clear capital-P Purpose, driven by a narrative protocol that can generate novelty over an indefinite horizon, without either a) jumping the shark, b) getting irretrievably stuck, or c) sinking below a threshold of minimum viable unpredictability.

A narrative protocol, for the purposes of this essay, is simply a storytelling formula that allows the current storytellers to continue the story one beat at a time, without a clear idea of how any of the larger narrative structure elements, like scenes, acts, or epic arcs, might evolve.

Note that many narrative models and techniques, including the best-known one, the Hero’s Journey, are not narrative protocols because they are designed to tell stories with clear termination behaviors. They are guaranteed-ending stories. They may be used to structure episodes within a protocol narrative, but by themselves are not narrative protocols.

This pair of definitions is not as abstract as it might seem. Many real-world fictional and non-fictional narratives approximate never-ending stories.

Long-running extended universe franchises (Star Wars, Star Trek, MCU), soap operas, South Park …, the Chinese national grand narrative, and perhaps the American one as well, are all approximate examples of protocol narratives driven by narrative protocols.

Protocols and Purpose

In ongoing discussions of protocols, several of us independently arrived at a conclusion that I articulate as protocols have functions but not purposes, by which I mean capital-P Purposes. Let’s distinguish two kinds of motive force in any narrative:

1. Functions are causal narrative mechanisms for solving particular problems in a predictable way. For example, one way to resolve a conflict between a hero and a villain is a fight. So a narrative technology that offers a set of tropes for fights has something like a fight(hero, villain) function that skilled authors or actors can invoke in specific media (text, screen, real-life politics). You might say that fight(hero, villain) transitions the narrative state causally from a state of unresolved conflict to resolved conflict. Functions need not be dramatic or supply entertainment though; they just need to move the action along, beat-by-beat, in a causal way.

2. Purposes are larger philosophical theses whose significance narratives may attest to, but do not (and cannot) exhaust. These theses may take the form of eternal conditions (“the eternal struggle between good and neutral”), animating paradoxes (“If God is good, why does He allow suffering to exist?”), or historicist, teleological terminal conditions. Not all stories have Purposes, but the claim is often made that the more elevated sort can and should. David Mamet, for instance, argues that good stories engage with and air eternal conflicts, drawing on their transformative power to drive events, without exhausting them.

In this scheme, narrative protocols only require a callable set of functions to be well-defined. They do not need, and generally do not have Purposes. Functions can sustain step-by-step behaviors all by themselves.
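Taking the fight(hero, villain) notation above at face value, here is a toy rendering, entirely my own, of a narrative protocol as state plus beat-level functions, with no terminal condition and no Purpose anywhere in the loop:

    # Toy narrative protocol: beat-level functions that causally move state.
    def introduce_conflict(state, a, b):
        state["unresolved"].append((a, b))
        return state

    def fight(state, hero, villain):
        # Resolve a conflict: one causal, predictable narrative beat.
        state["unresolved"].remove((hero, villain))
        state["resolved"].append((hero, villain))
        return state

    state = {"unresolved": [], "resolved": []}
    beats = [
        lambda s: introduce_conflict(s, "hero", "villain"),
        lambda s: fight(s, "hero", "villain"),
    ]
    for beat in beats:    # the story advances beat-by-beat; the list can grow forever
        state = beat(state)
    print(state)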

What’s more, not only are Purposes not necessary, they might even be actively harmful during periods of crisis, when arguably a bare-metal protocol narrative, comprising only functions, should exist.

There is, in fact, a tradeoff between having a protocol underlying a narrative, and an overarching Purpose guiding it from “above.”

The Protocol-Purpose Tradeoff

During periods of crisis, when larger logics may be uncomputable, and memory and identity integration over longer epochs may be intractable, it pays to shorten horizons until you get to computability and identity integrity — so long as the underlying assumptions that movement and deliberation are better than paralysis and thrashing hold.

The question remains though. When are such assumptions valid?

This is where the notion of a protocol enters the picture in a fuller way. There are protocols as in short foreground behavior sequences (like step-by-step), but there is also the idea of a big-P Protocol, as in a systematic (and typically constructed rather than natural) reality in the background that has more lawful and benevolent characteristics than you may suspect.

Enacting protocol narratives is enacting trust in a big-P Protocolized environment. You trust that the protocol narrative is much bigger than the visible tip of the iceberg that you functionally relate to.

As a simple illustration: on a sparse random graph, navigating by a greedy, myopic algorithm, one step at a time, toward destination coordinates is likely to get you trapped in a random cul-de-sac. But that same algorithm, on a regular rectangular grid, will not only get you to your destination, it will do so via a shortest path. You can trust the gridded reality more, given the same foreground behaviors.
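This is easy to check directly. The sketch below (my own, with arbitrary sizes, using the networkx library) runs the same greedy step rule on a regular grid and on a sparse random graph with random coordinates:

    import random
    import networkx as nx

    def dist2(pos, a, b):
        return (pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2

    def greedy_walk(G, pos, start, goal, max_steps=500):
        # Myopic rule: step to whichever neighbor is closest to the goal.
        node, path = start, [start]
        for _ in range(max_steps):
            if node == goal:
                return path, True
            nbrs = list(G.neighbors(node))
            if not nbrs:
                return path, False
            best = min(nbrs, key=lambda n: dist2(pos, n, goal))
            if dist2(pos, best, goal) >= dist2(pos, node, goal):
                return path, False          # cul-de-sac: no neighbor gets closer
            node, path = best, path + [best]
        return path, False

    # Regular grid: greedy reaches the goal, via a shortest (Manhattan) path.
    grid = nx.grid_2d_graph(10, 10)
    gpos = {v: v for v in grid}
    path, ok = greedy_walk(grid, gpos, (0, 0), (9, 9))
    print("grid:", ok, "steps:", len(path) - 1)     # 18 steps

    # Sparse random graph: the same rule usually dies in a local cul-de-sac.
    random.seed(0)
    G = nx.gnp_random_graph(100, 0.03, seed=0)
    pos = {v: (random.random(), random.random()) for v in G}
    print("random graph:", greedy_walk(G, pos, 0, 99)[1])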

In this example, the grid underlying the movement behavior is the big-P Protocol that makes the behavior more effective than it would normally be. It serves as a substitute for the big-P Purpose.

This also gives us a way to understand the promises, if not the realities, of big-P Purposes of the sort made by religion, and why there is an essential tension and tradeoff here. 

To take a generic example, let’s say I tell you that in my religion, the cosmos is an eternal struggle between Good and Evil, and that you should be Good in this life in order to enjoy a pleasurable heaven for eternity (terminal payoff) as well as to Do The Right Thing (eternal principle).

How would you use it?

This is not particularly useful in complex crisis situations where good and evil may be hard to disambiguate, and available action options may simply not have a meaningful moral valence.

The protocol directive of step-by-step is much less opinionated. It does not require you to act in a good way. It only requires you to take a step in a roughly right direction. And then another. And another. The actions do not even need to be justifiably rational with respect to particular consciously held premises. They just need to be deliberate.

*****

A sign that economic narratives are bare-bones protocol narratives is the fact that they tend to continue uninterrupted through crises that derail or kill other kinds of narratives. Through the Great Weirding and the Pandemic, we still got GDP, unemployment, inflation, and interest rate “stories.”

I bet that even if aliens landed tomorrow, even though the rest of us would be in a state of paralyzed inaction, unable to process or make sense of events, economists would continue to publish their numbers and argue about whether aliens landing is inflationary or deflationary. And at the microeconomic level, Matt Levine would probably write a reassuring Money Stuff column explaining how to think about it all in terms of SEC regulations and force majeure contract clauses.

I like making fun of economists, but if you think about it, there is a profound and powerful narrative capability at work here. Strong protocol narratives can weather events that are unnarratable for all other kinds of narratives. Events that destroy high-Purpose religious and political narratives might cause no more than a ripple in strong protocol narratives.

So if you value longevity and non-termination, and you sense that times are tough, it makes sense to favor Protocols over Purposes.

*****

Step-by-Step is Hard-to-Kill

While economic narratives provide a good and clear class of examples of protocol narratives, they are not the only or even best examples.

The best examples are ones that show that a bare set of narrative functions is enough to sustain psychological life indefinitely. That surprisingly bleak narratives are nevertheless viable.

The very fact that we can even talk of “going through the motions” or feeling “empty and purposeless” when a governing narrative for a course of events is unsatisfying reveals that something else is in fact continuing, despite the lack of Purpose. Something that is computationally substantial and life-sustaining.

I recall a line from (I think) an old Desmond Bagley novel I read as a teenager, where a hero is trudging through a trackless desert. His inner monologue is going: one bloody foot after the next bloody foot; one bloody step after the next bloody step.

Weird though it might seem, that’s actually a complete story. It works as a protocol narrative. There is a progressively summarizable logic to it, and a memory-ful evolving identity to it. If you’re an economist, it might even be a satisfying narrative, as good as “number go up.”

Protocol narratives only need functions to keep going.

They do not need Purposes, and generally are, to varying degrees, actively hostile to such constructs. It's not just "take it one day at a time," but an implied "don't think about weeks and months and the meaning of life; it might kill you."

While protocol narratives may tolerate elements of Purpose during normal times, they are especially hostile to them during crisis periods. If you think about it, step-by-step advancement of a narrative is a minimalist strategy. If a narrative can survive on a step-by-step type protocol alone, it is probably extraordinarily hard to kill, and doing more likely adds risk and fragility (hence the Protocol-Purpose tradeoff).

During periods of crisis, narrative protocols switch into a kind of triage mode where only step-by-step movement is allowed (somewhat like how, in debugging a computer program, stepping through code is a troubleshooting behavior). More abstract motive forces are deliberately suspended.

I like to think of the logic governing this as exposure therapy for life itself. In complex conditions, the most important thing to do is simply to choose life over and over, deliberately, step-by-step. To keep going is to choose life, and it is always the first order of business.

This is why, as I noted in the opening section, lack of emotional regulation is the first problem to address. Because in a crisis, if it is left unmanaged, it will turn into a retreat from life itself. As Franklin D. Roosevelt said, the only thing we have to fear is fear itself.

To reach for loftier abstractions than step-by-step in times of crisis is to retreat from life. Purpose is a life-threatening luxury you cannot afford in difficult times. But a narrative protocol will keep you going through even nearly unnarratable times. And even if it feels like merely going through empty motions, sometimes all it takes to choose life is to be slightly harder to kill.

Thursday, August 17, 2023

Born Rich

I want to pass on a few slightly edited clips from an interesting essay in The Dispatch by conservative writer Kevin Williamson that a friend pointed me to, followed by a comment on Williamson's ideas offered by another friend: "Wow, this one's a big gulp of the Kool-Aid. This thesis is patently untrue. As the one percent continues to grow in our current corporate low-tax business environment, with its constantly crippled regulation, the fallacy of this perspective grows along with it. This is exactly the thinking discussed in the book I recommended (Oreskes and Conway, "The Big Myth: How American Business Taught Us to Loathe Government and Love the Free Market"). Our democracy as it is currently functioning is not a Great Leveler."
The clips from Williamson:
One of the most distasteful aspects of our politics is the extent to which it is so obviously driven by envy, which is what 99 percent of that “privileged elite” talk ends up being about. But I suppose I am the wrong person to complain about that, because I was born rich, though not rich in the usual money sense.
Our intellectual and political life is dominated by a relatively narrow class of what we might call intellectually tall people, high-IQ people with diverse socioeconomic backgrounds. And while a great many of them believe that inherited wealth is profoundly unfair, very few of them have any similar thoughts to share about the social role of inherited intelligence.
One of the hardest things to drill into the noggins of the American ruling class (and let’s not pretend that there isn’t one, even if it isn’t exactly what you might expect) is that there is no more merit in being born with certain economically valuable intellectual talents than there is in being born tall, or with curly hair—or white, for that matter. Inherited wealth is an enormous factor in the lives of a relatively small number of Americans and a more modest one in the lives of a larger number, but inherited brainpower is the unearned asset that matters most. We live in a very competitive, very connected world, one with very, very efficient labor markets...We have pretty effective tools (including standardized testing) that are very useful for reaching far, wide, and deep into the population to identify intellectual high-fliers and to direct them into educational and career paths that will give them the chance to make the most out of their lives. There probably is no better place in the world to be born poor and smart—but there is no more merit in being born smart than there is blame in being born poor.
The American “meritocracy” is based to a considerable extent on the generally unspoken proposition that intelligence is merit, and that smart people deserve their success in a special way. Our country is run by smart people, and the smart people in charge very much want to believe that they are where they are because of merit, because of the exemplary lives they have led, not because of some unearned hereditary trait that is the intellectual equivalent of a trust fund. The 1994 book "The Bell Curve" was an attempt to explore the paradox of the hereditary “meritocracy” in a serious way, and it was shouted down by—this was not coincidental—the class of people whose self-conception as a meritorious elite was most directly threatened by the authors’ hypothesis.
Understanding the privileges that go along with inherited intellectual ability as being in a moral sense very much like the privileges that go along with inherited wealth (or an inherited social-racial position or whatever privilege you like) opens up a radical and disruptive perspective on American public life—and draws attention to social situations that, even if understood to be unfair because of the role of hereditary advantage, are not open to resolution through redistributive taxes or affirmative action or anything like that. We aren’t going to mandate that half of the brain surgeons or theoretical physicists have below-average IQs.
Being a conservative, I believe that a healthy society necessarily contains a great deal of organic, authentic diversity. Being a realist, I also believe that this diversity comes with hierarchy. As Russell Kirk observed:
Conservatives pay attention to the principle of variety. They feel affection for the proliferating intricacy of long-established social institutions and modes of life, as distinguished from the narrowing uniformity and deadening egalitarianism of radical systems. For the preservation of a healthy diversity in any civilization, there must survive orders and classes, differences in material condition, and many sorts of inequality. The only true forms of equality are equality at the Last Judgment and equality before a just court of law; all other attempts at levelling must lead, at best, to social stagnation. Society requires honest and able leadership; and if natural and institutional differences are destroyed, presently some tyrant or host of squalid oligarchs will create new forms of inequality.

Tuesday, August 15, 2023

Human History gets a rewrite.

I want to point to two articles I have enjoyed reading, both describing the recent book by Graeber and Wengrow, “The Dawn of Everything: A New History of Humanity.” The review by Deresiewicz is in The Atlantic, and the New Yorker review is by Lewis-Kraus. Some clips from Deresiewicz:
The Dawn of Everything is written against the conventional account of human social history as first developed by Hobbes and Rousseau; elaborated by subsequent thinkers; popularized today by the likes of Jared Diamond, Yuval Noah Harari, and Steven Pinker; and accepted more or less universally...The story is linear (the stages are followed in order, with no going back), uniform (they are followed the same way everywhere), progressive (the stages are “stages” in the first place, leading from lower to higher, more primitive to more sophisticated), deterministic (development is driven by technology, not human choice), and teleological (the process culminates in us).
It is also, according to Graeber and Wengrow, completely wrong. Drawing on a wealth of recent archaeological discoveries that span the globe, as well as deep reading in often neglected historical sources (their bibliography runs to 63 pages), the two dismantle not only every element of the received account but also the assumptions that it rests on. Yes, we’ve had bands, tribes, cities, and states; agriculture, inequality, and bureaucracy, but what each of these were, how they developed, and how we got from one to the next—all this and more, the authors comprehensively rewrite. More important, they demolish the idea that human beings are passive objects of material forces, moving helplessly along a technological conveyor belt that takes us from the Serengeti to the DMV. We’ve had choices, they show, and we’ve made them. Graeber and Wengrow offer a history of the past 30,000 years that is not only wildly different from anything we’re used to, but also far more interesting: textured, surprising, paradoxical, inspiring.
Is “civilization” worth it, the authors want to know, if civilization—ancient Egypt, the Aztecs, imperial Rome, the modern regime of bureaucratic capitalism enforced by state violence—means the loss of what they see as our three basic freedoms: the freedom to disobey, the freedom to go somewhere else, and the freedom to create new social arrangements? Or does civilization rather mean “mutual aid, social co-operation, civic activism, hospitality [and] simply caring for others”?
These are questions that Graeber, a committed anarchist—an exponent not of anarchy but of anarchism, the idea that people can get along perfectly well without governments—asked throughout his career. The Dawn of Everything is framed by an account of what the authors call the “indigenous critique.” In a remarkable chapter, they describe the encounter between early French arrivals in North America, primarily Jesuit missionaries, and a series of Native intellectuals—individuals who had inherited a long tradition of political conflict and debate and who had thought deeply and spoke incisively on such matters as “generosity, sociability, material wealth, crime, punishment and liberty.”
The Indigenous critique, as articulated by these figures in conversation with their French interlocutors, amounted to a wholesale condemnation of French—and, by extension, European—society: its incessant competition, its paucity of kindness and mutual care, its religious dogmatism and irrationalism, and most of all, its horrific inequality and lack of freedom. The authors persuasively argue that Indigenous ideas, carried back and publicized in Europe, went on to inspire the Enlightenment (the ideals of freedom, equality, and democracy, they note, had theretofore been all but absent from the Western philosophical tradition). They go further, making the case that the conventional account of human history as a saga of material progress was developed in reaction to the Indigenous critique in order to salvage the honor of the West. We’re richer, went the logic, so we’re better. The authors ask us to rethink what better might actually mean.