Friday, September 29, 2023

AI, a boon for science and a disaster for creatives

The Sept. 16 issue of the Economist has two excellent articles: How artificial intelligence can revolutionise science and How scientists are using artificial intelligence. I pass on here some edited clips from the first of these articles. I also want to point to much less benign commentary on how AI is moving toward threatening the livelihoods of creators of music, art, and literature: The Internet Is About to Get Much Worse.  

Could AI turbocharge scientific progress and lead to a golden age of discovery?

Some believe that AI can turbocharge scientific progress and lead to a golden age of discovery...Such claims provide a useful counterbalance to fears about large-scale unemployment and killer robots.
Many previous technologies have been falsely hailed as panaceas. The electric telegraph was lauded in the 1850s as a herald of world peace, as were aircraft in the 1900s; pundits in the 1990s said the internet would reduce inequality and eradicate nationalism...but there have been several periods in history when new approaches and new tools did indeed help bring about bursts of world-changing scientific discovery and innovation.
In the 17th century microscopes and telescopes opened up new vistas of discovery and encouraged researchers to favour their own observations over the received wisdom of antiquity, while the introduction of scientific journals gave them new ways to share and publicise their findings. The result was rapid progress in astronomy, physics and other fields, and new inventions from the pendulum clock to the steam engine—the prime mover of the Industrial Revolution.
Then, starting in the late 19th century, the establishment of research laboratories, which brought together ideas, people and materials on an industrial scale, gave rise to further innovations such as artificial fertiliser, pharmaceuticals and the transistor, the building block of the computer... The journal and the laboratory went further still: they altered scientific practice itself and unlocked more powerful means of making discoveries, by allowing people and ideas to mingle in new ways and on a larger scale. AI, too, has the potential to set off such a transformation.
Two areas in particular look promising. The first is “literature-based discovery” (LBD), which involves analysing existing scientific literature, using ChatGPT-style language analysis, to look for new hypotheses, connections or ideas that humans may have missed. LBD is showing promise in identifying new experiments to try—and even suggesting potential research collaborators.
The second area is “robot scientists”, also known as “self-driving labs”. These are robotic systems that use AI to form new hypotheses, based on analysis of existing data and literature, and then test those hypotheses by performing hundreds or thousands of experiments, in fields including systems biology and materials science. Unlike human scientists, robots are less attached to previous results, less driven by bias—and, crucially, easy to replicate.
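The propose-test-update loop such a "self-driving lab" runs can be caricatured in a few lines of Python. This is a toy sketch: the function names, the random proposal strategy, and the noisy objective are all invented for illustration, not any actual lab platform's API (a real system would condition its proposals on accumulated results).

```python
import random

random.seed(0)  # make the toy run reproducible

def propose_hypotheses(knowledge, n=5):
    # Stand-in for an AI model suggesting candidate experimental
    # conditions; here it just samples a parameter at random.
    return [{"param": random.uniform(0, 1)} for _ in range(n)]

def run_experiment(hypothesis):
    # Stand-in for the robotic platform: a noisy toy objective
    # with an unknown optimum at param = 0.7.
    x = hypothesis["param"]
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

knowledge = []  # accumulating (hypothesis, result) records
for cycle in range(10):
    for h in propose_hypotheses(knowledge):
        knowledge.append((h, run_experiment(h)))

best = max(knowledge, key=lambda rec: rec[1])
print(f"best condition after {len(knowledge)} experiments: {best[0]}")
```

The point the Economist piece makes is in the loop structure itself: because the proposer and the experimenter are both machines, the cycle can run thousands of times without fatigue or attachment to prior results.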
In 1665, during a period of rapid scientific progress, Robert Hooke, an English polymath, described the advent of new scientific instruments such as the microscope and telescope as “the adding of artificial organs to the natural”. They let researchers explore previously inaccessible realms and discover things in new ways, “with prodigious benefit to all sorts of useful knowledge”. For Hooke’s modern-day successors, the adding of artificial intelligence to the scientific toolkit is poised to do the same in the coming years—with similarly world-changing results.

Wednesday, September 27, 2023

Memory for stimulus sequences unique to humans?

Continuing the never-ending quest to find fundamental human abilities or behaviors lacking in other animals (many, like the mirror recognition test, have failed), Lind et al. find that bonobos fail to remember the order of two stimuli even after 2,000 trials. I pass on their abstract below. They suggest that the ability to remember sequences may set humans apart from other animals. Hmmmmm, maybe the bonobos just don't think the task is important? They should try the test on ravens and crows, which have shown amazing smarts...
Identifying cognitive capacities underlying the human evolutionary transition is challenging, and many hypotheses exist for what makes humans capable of, for example, producing and understanding language, preparing meals, and having culture on a grand scale. Instead of describing processes whereby information is processed, recent studies have suggested that there are key differences between humans and other animals in how information is recognized and remembered. Such constraints may act as a bottleneck for subsequent information processing and behavior, proving important for understanding differences between humans and other animals. We briefly discuss different sequential aspects of cognition and behavior and the importance of distinguishing between simultaneous and sequential input, and conclude that explicit tests on non-human great apes have been lacking. Here, we test the memory for stimulus sequences-hypothesis by carrying out three tests on bonobos and one test on humans. Our results show that bonobos’ general working memory decays rapidly and that they fail to learn the difference between the order of two stimuli even after more than 2,000 trials, corroborating earlier findings in other animals. However, as expected, humans solve the same sequence discrimination almost immediately. The explicit test on whether bonobos represent stimulus sequences as an unstructured collection of memory traces was not informative as no differences were found between responses to the different probe tests. However, overall, this first empirical study of sequence discrimination on non-human great apes supports the idea that non-human animals, including the closest relatives to humans, lack a memory for stimulus sequences. This may be an ability that sets humans apart from other animals and could be one reason behind the origin of human culture.

Monday, September 25, 2023

Emergent analogical reasoning in large language models

Things are moving very fast in AI development. From Webb et al:
The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

Friday, September 22, 2023

This is the New 'Real World'

For my own later reference, and hopefully of use to a few MindBlog readers,  I have edited, cut and pasted, and condensed from 3960 to 1933 words the latest brilliant article generated by Venkatesh Rao at

The word world, when preceded by the immodest adjective real, is a self-consciously anthropocentric one, unlike planet or universe. To ask what sort of world we live in invites an inherently absurd answer. But if enough people believe in an absurd world, absurd but consequential histories will unfold. And consequentiality, if not truth, perhaps deserves the adjective real.

Not all individual worlds that in principle contribute to the real world are equally consequential… A familiar recent historical real world, the neoliberal world, was shaped more by the beliefs of central bankers than by the beliefs of UFO-trackers. You could argue that macroeconomic theories held by central bankers are not much less fictional than UFOs. But worlds built around belief in specific macroeconomic theories mattered more than ones built around belief in UFOs. In 2003 at least, it would have been safe to assume this  - it is no longer a safe assumption in 2023.

Of the few hundred consciously shared worlds like religions, fandoms, and nationalisms that are significant, perhaps a couple of dozen matter strongly, and perhaps a dozen matter visibly, the others comprising various sorts of black or gray swans lurking in the margins of globally recognized consequentiality.

This then, is the “real” world — the dozen or so worlds that visibly matter in shaping the context of all our lives… The consequentiality of the real world is partly a self-fulfilling prophecy of its own reality. Something that can play the role of truth. For a while.

The fact that some worlds survive a brutal winnowing process does not alter the fact that they remain anthropocentric is/ought conceits… A world that has made the cut to significance and consequentiality, to the level of mattering, must still survive its encounters with material, as opposed to social, realities... For all the consequential might of the Catholic Church in the 17th century, it was Galileo’s much punier Eppur si muove world that eventually ended up mattering more. Truth eventually outweighed short-term consequentiality in the enduring composition of real.

It would take a couple of centuries for Galileo’s world to be counted among the ones that mattered in shaping the real world. And the world of the Catholic Church, despite centuries of slow decline, still matters... It is just that the real world has gotten much bigger in scope, and the other worlds constituting it, like the one shaping the design of the iPhone 15, matter much more.

…to answer a question like what sort of world do we live in? is to craft an unwieldy composite portrait out of the dozen or so constituent worlds that matter at any given time …it is a fragile, unreliable, dubious, borderline incoherent, unsatisfying house of cards destined to die. Yet, while it lives and reigns, it is an all-consuming, all-dominating thing… the “real” world is not necessarily any more real than private fantasies. It is merely vastly more consequential — for a while.

When “the real world” goes away because we’ve stopped believing in it, as tends to happen every few decades, it can feel like material reality itself, rather than a socially constructed state of mind, has come undone. And we scramble to construct a new real world. It is a necessary human tendency. Humans need a real world to serve as a cognitive “outdoors” (and escape from “indoors”), even if they are not eternal or true. A shared place we can accuse each other of not living in, and being detached from…Humans will conspire to cobble together a dozen new fantasies and label it real world, and you and I will have to live in it too.

So it is worth asking the question, what sort of world do we live in? And it is worth actually constructing the answer, and giving it the name the real world, and using it to navigate life — for a while.

So let’s take a stab at it.

The real world of the early eighties was one defined by the boundary conditions of the Cold War, an Ozone hole, PCs, video games, Michael Jackson, a pre-internet social fabric, and no pictures of Uranus, Neptune, Pluto, or black holes shaping our sense of the place of our planet within the broader cosmos.

The real world that took shape in the nineties, the neoliberal world to which Margaret Thatcher declared there is no alternative (TINA), was one defined by the rise of the internet, unipolar geopolitics, the economic ascent of China, The Simpsons, Islamic terrorism, and perhaps most importantly, a sense of politics ceasing to matter against the backdrop of an unstoppable increase in global prosperity.

That real world began to wobble after 9/11, bust critical seams during the Great Recession, and started to go away in earnest after 2015, in the half-decade that ended with the pandemic. The passing of the neoliberal world was experienced as a trauma across the world, even by those who managed to credibly declare themselves winners.

What has taken shape in the early 20s defies a believable characterization as real, for winners and losers alike. Declaring it weird studiously avoids assessments of realness. Some, like me, go further and declare the world to be permaweird…the weirdness is here to stay.

Permaweird does not mean perma-unreal. The elusiveness of a “New Normal” does not mean no “New Real” can emerge, out of new, and learnable, patterns of agency and consequentiality…the forces shaping the New Real are becoming clear. Here is a list off the top of my head. It should be entirely unsurprising.

1 Energy transition
2 Aging population
3 Weird weather
4 Machine learning
5 Memefied politics
6 The slowing of Moore’s Law
7 Meaning crises (plural)
8 Stagnation of the West
9 Rise of the Rest
10 Post-ZIRP economics
11 Post-Covid supply chains
12 Climate refugee movements

You will notice that none of the forces on the list is particularly new or individually very weird. What’s weird is the set as a whole, and the difficulty of putting them together into a notion of normalcy.

Forces though, are not worlds. We may trade in our gasoline-fueled cars for EVs, but we do not inhabit “the energy transition” the way we inhabit a world-idea like “neoliberalism” or “religion.”

Sometimes forces directly translate into consequential worlds. In the 1990s, the internet was a force shaping the real world, and also created a world — the inhabitable world of the very online — that was part of the then-emerging sense of “real.”

Sometimes forces indirectly create worlds. Low interest rates created another important constituent world of the Old Real… Vast populations in liberal urban enclaves lived out ZIRPy lifestyles, eating their avocado toast, watching TED talks, riding sidewalk scooters, producing “content”, and perversely refusing to be rich enough to buy homes.

Something similar appears to be happening in response to the force of post-ZIRP economics. The public internet, dominated by vast global melting-pot platforms featuring vast culture wars, appears to be giving way to a mix of what I’ve called cozyweb enclaves and protocol media,…This world too, will be positioned to consequentially shape the New Real as strongly as the very online world shaped the Old Real.

I won’t try to provide careful arguments here, or justify my speculative inventory of forces, but here is my list of resulting worlds being carved out by them, which I have arrived at via a purely impressionistic leap of attempted synthesis. Together, these worlds constitute the New Real:

1 Climate refugee world
2 Disaster world (the set of places currently experiencing disaster conditions)
3 Dark Forest online world
4 Death-star world (centered on the assemblage of spaces controlled by declining wealth or power concentrations)
5 Non-English civilizational worlds (including Chinese and Indian)
6 Weird weather worlds
7 Non-institutional world (including, but not limited to, free-agent and blockchain-based worlds)
8 Trad Retvrn LARP world
9 Retiree world
10 Silicon realpolitik world
11 AI-experienced world
12 Resource-localism world (set of spaces shaped by a dominant scarce resource like energy or water)

These worlds are worlds because it is possible to imagine lifestyles entirely immersed in them. They are consequential worlds because each already has enough momentum and historical leverage to reshape the composite understanding of real. What climate refugees do in climate refugee world will shape what all of us do in the real world.

World 4 is worth some elaboration. In it I include almost everything that dominates current headlines and feels “real,” including spaces dominated by billionaires, governments, universities, and traditional media. Yet, despite the degree to which it dominates the current distribution of attention, my sense is that it has only a small and diminishing role to play in defining the New Real. When we use the phrase in the real world in the coming decade, we will not mainly be referring to World 4.

World 11 is also worth some elaboration. One reason I believe weirdness is here to stay is that the emerging ontologies of the New Real are neither entirely human in origin, nor likely to respect human desires for common-sense conceptual stability in “reality.”

For the moment, AIs inhabit the world on our terms, relating to it through our categories. But it is already clear that they are not restricted to human categories, or even to categories expressible within human languages. Nor should they be, if we are to tap into their powers. They are limited by human ontology only to the extent that their presence in the world must be mediated by humans. … they will definitely evolve in ways that keep the real world permaweird.

Can we slap a usefully descriptive short label onto the New Real, comparable to “Neoliberal World” or “Cold War World”?

There is no such obviously dominant eigenvector of consequentiality in the New Real, but the most obvious candidate is probably global warming. So we might call the New Real the warming world. Somehow though, it doesn’t feel like warming shapes our experience of realness as clearly as its predecessors. Powerful though the calculus of climate change is, it operates via too many subtle degrees of indirection to shape our sense of the real. Still, I’ll leave the phrase there for your consideration.

An idiosyncratic personal candidate … is magic-realist world. A world that is consequentially real and permaweird is a world that feels magical and real at the same time, and is sustainably inhabitable: but only if you let go of a craving for a sense of normalcy.

It offers unprecedented, god-like modes of agency that are available for almost anyone to exercise…The catch is this — attachment to normalcy equals learned helplessness in the face of all this agency. If you want to feel normal, almost none of the magical agency is available to you. An attachment to normalcy limits you to mere magical thinking, in the comforting company of an equally helpless majority. If you are willing to live with a sense of magical realism, a great deal more suddenly opens up.

This, I suspect, is the flip side of the idea that “we are as gods, and might as well get good at it.” There is no normal way to feel like a god. A magical being must necessarily experience the world as a magical doing. To experience the world as permaweird is to experience it as a god.

This is not necessarily an optimistic thought. A real world, shaped by god-like humans, each operating by an idiosyncratic sense of their own magical agency, is not necessarily a good world, or a world that conjures up effective collective responses to its shared planetary problems.

But it is a world that does something, rather than nothing, and that’s a start.

Wednesday, September 20, 2023

Chemistry that regulates whether we stay with what we're doing or try something new

Sidorenko et al. demonstrate that stimulating the brain's cholinergic and noradrenergic systems enhances optimal foraging behaviors in humans. Their significance statement and abstract:  


Deciding when to say “stop” to the ongoing course of action is paramount for preserving mental health, ensuring the well-being of oneself and others, and managing resources in a sustainable fashion. And yet, cross-species studies converge in their portrayal of real-world decision-makers who are prone to the overstaying bias. We investigated whether and how cognitive enhancers can reduce this bias in a foraging context. We report that the pharmacological upregulation of cholinergic and noradrenergic systems enhances optimality in a common dilemma—staying with the status quo or leaving for more rewarding alternatives—and thereby suggest that acetylcholine and noradrenaline causally mediate foraging behavior in humans.
Foraging theory prescribes when optimal foragers should leave the current option for more rewarding alternatives. Actual foragers often exploit options longer than prescribed by the theory, but it is unclear how this foraging suboptimality arises. We investigated whether the upregulation of cholinergic, noradrenergic, and dopaminergic systems increases foraging optimality. In a double-blind, between-subject design, participants (N = 160) received placebo, the nicotinic acetylcholine receptor agonist nicotine, a noradrenaline reuptake inhibitor reboxetine, or a preferential dopamine reuptake inhibitor methylphenidate, and played the role of a farmer who collected milk from patches with different yield. Across all groups, participants on average overharvested. While methylphenidate had no effects on this bias, nicotine, and to some extent also reboxetine, significantly reduced deviation from foraging optimality, which resulted in better performance compared to placebo. Concurring with amplified goal-directedness and excluding heuristic explanations, nicotine independently also improved trial initiation and time perception. Our findings elucidate the neurochemical basis of behavioral flexibility and decision optimality and open unique perspectives on psychiatric disorders affecting these functions.
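For readers unfamiliar with the foraging theory the abstract invokes: the marginal value theorem says an optimal forager should leave a patch at the point that maximizes its long-run reward rate, which happens once the patch's diminishing intake no longer beats the average rate across patches. A toy numerical sketch (my own illustration, with an arbitrary saturating gain function, not the paper's milk-farming task):

```python
import math

def cumulative_gain(t, scale=10.0, decay=0.5):
    # Diminishing returns within a patch: gain saturates over time.
    return scale * (1.0 - math.exp(-decay * t))

def optimal_leave_time(travel_time, dt=0.001, horizon=30.0):
    # Marginal value theorem: choose the stay time t that maximizes
    # the long-run rate, gain(t) / (t + travel_time), by brute force.
    best_t, best_rate = dt, 0.0
    t = dt
    while t < horizon:
        rate = cumulative_gain(t) / (t + travel_time)
        if rate > best_rate:
            best_t, best_rate = t, rate
        t += dt
    return best_t

# Classic MVT prediction: longer travel between patches means
# it is optimal to stay longer in each patch before leaving.
print(optimal_leave_time(1.0), optimal_leave_time(5.0))
```

The "overstaying bias" in the study is behavior where subjects remain in a patch past this optimum; the nicotine and reboxetine groups deviated less from it than placebo.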

Monday, September 18, 2023

Does "situational awareness" in AI's large language models mean consciousness?

The answer to that question would be no, for a number of reasons I won't go into, but Berglund et al. provide an interesting nudge in the direction of sentience-like behavior in some large language models by showing an example of situational awareness. They provide a link to their code. Here is their abstract:
We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: this https URL.

Friday, September 15, 2023

What we seek to save when we seek to save the world

Yet another fascinating set of ideas from Venkatesh Rao that I want to save for myself by doing a MindBlog post of some clips from the piece.
...threats that provoke savior responses are generally more legible than the worlds that the saviors seek to save, or the mechanisms of destruction...I made up a 2x2 to classify the notions of worlds-to-save that people seem to have. The two axes are biological scope and temporal scope...Biological scope is the 'we' - the range of living beings included as subjects in the definition of 'world'...Temporal scope is the range of time over which any act of world-saving seeks to preserve a historical consciousness associated with the biological scope. Worlds exist in time more than they do in space.
Constructing a 2x2 out of the biological and temporal scope dimensions we get the following view of worlds-to-save (blue), with representative savior types (green) who strive to save them.
Deep temporal scope combined with a narrow biological scope gives us civilizations for worlds, ethnocentrists as saviors... The End of the World is imagined in collapse-of-civilization terms.
Shallow temporal scope combined with a broad biological scope gives us technological modernity for a world, and cosmopolitans for saviors. A shallow temporal scope does not imply lack of historical imagination or curiosity. It merely means less of history being marked for saving...The End of the World is imagined in terms of rapid loss of scientific knowledge and technological capabilities.
Shallow temporal scope combined with narrow biological scope gives us a world defined by a stable landscape of modern nations...The End of the World is imagined in terms of descent to stateless anarchy. Failure is imagined as a Hobbesian condition of endemic (but not necessarily primitive or ignorant) warfare.
...the most fragile kind of world you can imagine trying to save: one with both a broad biological scope, and a deep temporal scope. This is the world as wildernesses...The End of the World is imagined in terms of ecological devastation and reduction of the planet to conditions incapable of sustaining most life. Failure is imagined in terms of forced extreme adaptation behaviors for the remnants of life. A rather unique version of this kind of world-saving impulse is one that contemplates species-suicide: viewing humans as the threat the world must be saved from. Saving the world in this vision requires eliminating humanity so the world can heal and recover.
I find myself primarily rooting for those in the technological modernity quadrant, and secondarily for those in the wildernesses quadrant. I find myself resisting the entire left half, but I’ve made my peace with their presence on the world-saving stage. I’m a cosmopolitan with Gaian tendencies.
I think, for a candidate world-to-save to be actually worth saving, its history must harbor inexhaustible mysteries. A world whose past is not mysterious has a future that is not interesting. If a world is exhausted of its historical mysteries, biological and/or temporal scope must be expanded to remystify and re-enchant it. This is one reason cosmopolitanism and the world-as-technological-modernity appeal to me. Its history is fundamentally mysterious in a way civilizational or national histories are not. And this is because the historical consciousness of technological modernity is, in my opinion, pre-civilizational in a way that is much closer to natural history than civilization ever gets.
For a cosmopolitan with Gaian tendencies, to save the modern world is to rewild and grow the global web of already slightly wild technological capabilities. Along with all the knowledge and resources — globally distributed in ways that cannot be cleanly factored across nations, civilizations, and other collective narcissisms — that is required to drive that web sustainably. And in the process, perhaps letting notions of civilization — including wishful notions of regulating and governing technology in ‘human centric’ ways — fall by the wayside if they lack the vitality and imagination to accommodate technological modernity.

Wednesday, September 13, 2023

Constructing Self and World

There is a strong similarity between the predictive processing brain model that has been the subject of numerous MindBlog posts, and the operations that ChatGPT and other generative pre-trained transformer algorithms are performing, with the ‘priors’ of the predictive processing model being equivalent to the ‘pre-trained’ weightings of the generative transformer algorithms.
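The core predictive-processing move behind this analogy can be caricatured in a few lines: a percept is a precision-weighted compromise between what the prior predicts and what the senses report. This is my own toy sketch of that textbook update rule, not anything from the podcast discussed below:

```python
def perceive(prior, observation, prior_precision, obs_precision):
    # Precision-weighted prediction-error update: the percept moves
    # from the prior toward the observation in proportion to how
    # reliable (precise) the sensory evidence is.
    error = observation - prior
    weight = obs_precision / (prior_precision + obs_precision)
    return prior + weight * error

# A strong (high-precision) prior barely moves toward the data;
# a weak prior is dominated by it.
print(perceive(0.0, 1.0, prior_precision=9.0, obs_precision=1.0))  # 0.1
print(perceive(0.0, 1.0, prior_precision=1.0, obs_precision=9.0))  # 0.9
```

In the analogy, learning which priors deserve high precision is the counterpart of the transformer's pre-trained weightings.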

The open and empty awareness of the non-dual perspective corresponds to the ‘generator’ component of the AI algorithms. It is what can begin to allow reification - rendering opaque rather than transparent - of the self model and other products of the underlying content-free open awareness generator (such as our perceptions of trees, interoceptive signals, cultural rules, etc.). It enables seeing, rather than being, the glass window through which you are viewing the tree in the yard. The rationale of non-dual awareness is not to have ‘no-self.’ The ‘self’ prior is there because it is a very useful avatar for interactions. Rather, the non-dual perspective can enable a tweaking or re-construction of previously transparent priors - now rendered opaque - that lets go of their less useful components. The point of having an expanded 'no self' is to become aware of and refine the illusions or phantasies about what is in our internal and external worlds that arise from it.

The paragraphs above derive from my listening to one of Sam Harris’ podcasts in his “Making Sense” series titled “Constructing Self and World.” It was a conversation with Shamil Chandaria, who is a philanthropist, serial entrepreneur, technologist, and academic with multidisciplinary research interests. During the conversation a number of ideas I am familiar with were framed in a very useful way, and I wanted to put them down and pass on to MindBlog readers the thumbnail summary above.

(The above is a repost of my May 31 post, which I recently stumbled onto and decided to rearrange.) 

Monday, September 11, 2023

Friday, September 08, 2023

Open access articles on consciousness (Thomas Metzinger and others)

I am overwhelmed by how much good stuff comes flooding into my email inbox, even after I have deleted 90% of it unopened. A newsletter from the Journal of Consciousness Studies points to Imprint Academic's open access articles. As an example, I pass on the abstract of a Metzinger article that is right down my alley. It can be downloaded as a PDF file.

Thomas Metzinger  


Abstract: What we traditionally call ‘conscious thought’ actually is a subpersonal process, and only rarely a form of mental action. The paradigmatic, standard form of conscious thought is non-agentive, because it lacks veto-control and involves an unnoticed loss of epistemic agency and goal-directed causal self-determination at the level of mental content. Conceptually, it must be described as an unintentional form of inner behaviour. Empirical research shows that we are not mentally autonomous subjects for about two thirds of our conscious lifetime, because while conscious cognition is unfolding, it often cannot be inhibited, suspended, or terminated. The instantiation of a stable first-person perspective as well as of certain necessary conditions of personhood turn out to be rare, graded, and dynamically variable properties of human beings. I argue that individual representational events only become part of a personal-level process by being functionally integrated into a specific form of transparent conscious self-representation, the ‘epistemic agent model’ (EAM). The EAM may be the true origin of our consciously experienced first-person perspective.

Wednesday, September 06, 2023

Mapping the physical properties of odorant molecules to their perceptual characteristics.

I pass on parts of the editor's summary and the abstract of a foundational piece of work by Lee et al. that produces a map linking odorant molecular structures to their perceptual experience, analogous to the known maps for vision and hearing that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. I also pass on the first few (slightly edited) paragraphs of the paper that set context. Motivated readers can obtain a PDF of the article from me. (This work does not engage the problem, noted by Sagar et al., that the same volatile molecule may smell different to different people - the same odor can smell ‘fruity’ and ‘floral’ to one person and ‘musky’ and ‘decayed’ to another.)


For vision and hearing, there are well-developed maps that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. The sense of olfaction does not yet have such a map. Using a graph neural network, Lee et al. developed a principal odor map (POM) that faithfully represents known perceptual hierarchies and distances. This map outperforms previously published models to the point that replacing a trained human’s responses with the model output would improve overall panel description. The POM coordinates were able to predict odor intensity and perceptual similarity, even though these perceptual features were not explicitly part of the model training.
Mapping molecular structure to odor perception is a key challenge in olfaction. We used graph neural networks to generate a principal odor map (POM) that preserves perceptual relationships and enables odor quality prediction for previously uncharacterized odorants. The model was as reliable as a human in describing odor quality: On a prospective validation set of 400 out-of-sample odorants, the model-generated odor profile more closely matched the trained panel mean than did the median panelist. By applying simple, interpretable, theoretically rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
Initial paragraphs of text:
A fundamental problem in neuroscience is mapping the physical properties of a stimulus to perceptual characteristics. In vision, wavelength maps to color; in audition, frequency maps to pitch. By contrast, the mapping from chemical structures to olfactory percepts is poorly understood. Detailed and modality-specific maps such as the Commission Internationale de l’Éclairage (CIE) color space (1) and Fourier space (2) led to a better understanding of visual and auditory coding. Similarly, to better understand olfactory coding, the field of olfaction needs a better map.
Pitch increases monotonically with frequency. By contrast, the relationship between odor percept and odorant structure is riddled with discontinuities...structurally similar pairs of molecules are frequently not perceptually similar. These discontinuities in the structure-odor relationship suggest that standard chemoinformatic representations of molecules—functional group counts, physical properties, molecular fingerprints, and so on—that have been used in recent odor modeling work are inadequate to map odor space.
To generate odor-relevant representations of molecules, we constructed a message passing neural network (MPNN), which is a specific type of graph neural network (GNN), to map chemical structures to odor percepts. Each molecule was represented as a graph, with each atom described by its valence, degree, hydrogen count, hybridization, formal charge, and atomic number. Each bond was described by its degree, its aromaticity, and whether it is in a ring. Unlike traditional fingerprinting techniques, which assign equal weight to all molecular fragments within a set bond radius, a GNN can optimize fragment weights for odor-specific applications. Neural networks have unlocked predictive modeling breakthroughs in diverse perceptual domains [e.g., natural images, faces, and sounds] and naturally produce intermediate representations of their input data that are functionally high-dimensional, data-driven maps. We used the final layer of the GNN (henceforth, “our model”) to directly predict odor qualities, and the penultimate layer of the model as a principal odor map (POM). The POM (i) faithfully represented known perceptual hierarchies and distances, (ii) extended to out-of-sample (hereafter, “novel”) odorants, (iii) was robust to discontinuities in structure-odor distances, and (iv) generalized to other olfactory tasks.
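The core idea of a message passing neural network can be illustrated with a toy sketch: each node (atom) repeatedly aggregates feature vectors from itself and its bonded neighbors, applies a learned linear transformation and a nonlinearity, and the graph-level embedding (the analog of a POM coordinate) is read out by averaging the final node states. This is my own minimal illustration, not the authors' architecture; the graph, features, and weights below are made up.

```python
def message_passing_embedding(adjacency, node_features, weight_rounds):
    """Toy MPNN. Each round: a node sums its own and its neighbours'
    feature vectors (message passing), then applies a linear map given
    as a list of output columns, followed by a ReLU. The graph-level
    embedding is the mean of the final node states (a simple readout)."""
    n = len(adjacency)
    h = [row[:] for row in node_features]
    for w in weight_rounds:
        # Aggregate: each node collects its own state plus neighbours'.
        agg = []
        for i in range(n):
            vec = h[i][:]
            for j in range(n):
                if adjacency[i][j]:
                    vec = [a + b for a, b in zip(vec, h[j])]
            agg.append(vec)
        # Transform + ReLU: dot each aggregated vector with each column of w.
        h = [[max(0.0, sum(x * wc for x, wc in zip(vec, col))) for col in w]
             for vec in agg]
    dim = len(h[0])
    # Mean-pool node states into one graph embedding.
    return [sum(node[k] for node in h) / n for k in range(dim)]

# Hypothetical 3-atom chain molecule with 2 hand-made features per atom.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
identity = [[1.0, 0.0], [0.0, 1.0]]  # identity weights, for a checkable result
print(message_passing_embedding(adj, feats, [identity]))
```

In the real model the weights are learned by backpropagation against the odor labels, and the penultimate layer, rather than a hand-rolled mean, supplies the POM coordinates.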
We curated a reference dataset of ~5000 molecules, each described by multiple odor labels (e.g., creamy, grassy), by combining the Good Scents and Leffingwell & Associates (GS-LF) flavor and fragrance databases. To train our model, we optimized model parameters with a weighted-cross entropy loss over 150 epochs using Adam with a learning rate decaying from 5 × 10−4 to 1 × 10−5 and a batch size of 128. The GS-LF dataset was split 80/20 training/test, and the 80% training set further subdivided into five cross-validation splits. These cross-validation splits were used to optimize hyperparameters using Vizier, a Bayesian optimization algorithm, by tuning across 1000 trials. Details about model architecture and hyperparameters are given in the supplementary methods. When properly hyperparameter-tuned, performance was found to be robust across many model architectures. We present results for the model with the highest mean area under the receiver operating characteristic curve (AUROC) on the cross-validation set (AUROC = 0.89).
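The paper gives only the endpoints of the learning-rate schedule (5 × 10−4 decaying to 1 × 10−5 over 150 epochs). Assuming an exponential decay between those endpoints, which is a common choice but my assumption here, the per-epoch rate can be sketched as:

```python
def decayed_lr(epoch, total_epochs=150, lr_start=5e-4, lr_end=1e-5):
    """Exponential interpolation from lr_start at epoch 0 to lr_end at
    the final epoch. The endpoints come from the paper; the exponential
    shape of the decay is an assumption for illustration."""
    frac = epoch / (total_epochs - 1)
    return lr_start * (lr_end / lr_start) ** frac

print(decayed_lr(0))    # 0.0005
print(decayed_lr(149))  # 1e-05, up to floating-point rounding
```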

Monday, September 04, 2023

Inhalation boosts perceptual awareness and decision speed

From Ludovic Molle et al. (open source):  


Breathing is a ubiquitous biological rhythm in animal life. However, little is known about its effect on consciousness and decision-making. Here, we measured the respiratory rhythm of humans performing a near-threshold discrimination experiment. We show that inhalation, compared with exhalation, improves perceptual awareness and accelerates decision-making while leaving accuracy unaffected.
The emergence of consciousness is one of biology’s biggest mysteries. During the past two decades, a major effort has been made to identify the neural correlates of consciousness, but in comparison, little is known about the physiological mechanisms underlying first-person subjective experience. Attention is considered the gateway of information to consciousness. Recent work suggests that the breathing phase (i.e., inhalation vs. exhalation) modulates attention, in such a way that attention directed toward exteroceptive information would increase during inhalation. One key hypothesis emerging from this work is that inhalation would improve perceptual awareness and near-threshold decision-making. The present study directly tested this hypothesis. We recorded the breathing rhythms of 30 humans performing a near-threshold decision-making task, in which they had to decide whether a liminal Gabor was tilted to the right or the left (objective decision task) and then to rate their perceptual awareness of the Gabor orientation (subjective decision task). In line with our hypothesis, the data revealed that, relative to exhalation, inhalation improves perceptual awareness and speeds up objective decision-making, without impairing accuracy. Overall, the present study builds on timely questions regarding the physiological mechanisms underlying consciousness and shows that breathing shapes the emergence of subjective experience and decision-making.
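Analyses like this require assigning each trial to a breathing phase. A crude way to do that from a respiration trace (e.g., belt amplitude) is to classify rising amplitude as inhalation and falling amplitude as exhalation; the finite-difference sketch below is my illustration, not the authors' pipeline, and the sine-wave trace is synthetic.

```python
import math

def breathing_phase(signal, t):
    """Classify sample t of a respiration trace as 'inhale' or 'exhale'
    by the sign of the local finite difference: rising amplitude is
    taken as inhalation, falling as exhalation."""
    return "inhale" if signal[t + 1] - signal[t] > 0 else "exhale"

# Synthetic respiration trace: one slow sine cycle sampled at 100 points.
trace = [math.sin(2 * math.pi * i / 100) for i in range(100)]
print(breathing_phase(trace, 10))  # rising quarter of the cycle -> inhale
print(breathing_phase(trace, 60))  # falling part of the cycle -> exhale
```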

Friday, September 01, 2023

The fragility of artists’ reputations from 1795 to 2020

Zhang et al. do an interesting study using natural language processing to measure reputation over time:  


This study uses machine-learning techniques and a historical corpus to examine the evolution of artists’ reputations over time. Contrary to popular wisdom, we find that most artists’ reputations peak just before their death, and then start to decline. This decline is strongest for artists who were most popular during their lifetime. We show that artists’ reduced visibility and changes in the public’s aesthetic taste explain much of the posthumous reputation decline. This study highlights how social perception of historical figures can shift and emphasizes the vulnerability of human reputation. Methodologically, the study illustrates an application of natural language processing to measure reputation over time.
This study explores the longevity of artistic reputation. We empirically examine whether artists are more or less venerated after their death. We construct a massive historical corpus spanning 1795 to 2020 and build separate word-embedding models for each five-year period to examine how the reputations of over 3,300 famous artists—including painters, architects, composers, musicians, and writers—evolve after their death. We find that most artists gain their highest reputation right before their death, after which it declines, losing nearly one SD every century. This posthumous decline applies to artists in all domains, includes those who died young or unexpectedly, and contradicts the popular view that artists’ reputations endure. Contrary to the Matthew effect, the reputational decline is the steepest for those who had the highest reputations while alive. Two mechanisms—artists’ reduced visibility and the public’s changing taste—are associated with much of the posthumous reputational decline. This study underscores the fragility of human reputation and shows how the collective memory of artists unfolds over time.
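Once a separate embedding model exists for each five-year period, one simple way to turn it into a reputation score is to compare an artist's vector with vectors for positively and negatively valenced reputation words within that period's model. The scoring scheme and the toy vectors below are my own illustration of the general approach, not necessarily the measure Zhang et al. used.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def reputation_score(artist_vec, positive_vecs, negative_vecs):
    """Mean similarity to positive reputation words (e.g. 'celebrated')
    minus mean similarity to negative ones (e.g. 'forgotten'), computed
    within one period's embedding model. Word lists are hypothetical."""
    pos = sum(cosine(artist_vec, v) for v in positive_vecs) / len(positive_vecs)
    neg = sum(cosine(artist_vec, v) for v in negative_vecs) / len(negative_vecs)
    return pos - neg

# Toy 2-d vectors: an artist aligned with the "positive" direction.
print(reputation_score([1.0, 0.0], [[1.0, 0.0]], [[0.0, 1.0]]))  # 1.0
```

Tracking this score across the per-period models yields a reputation trajectory; the reported decline of nearly one SD per century would appear as a downward slope after the artist's death.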