Tuesday, November 23, 2021

Socrates, Diderot, and Wolpert on Writing and Printing

I have to pass on these quotes sent by one of my Chaos and Complexity Seminar colleagues at the University of Wisconsin:
Socrates on writing, from Phaedrus, 275a-b
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
Denis Diderot, Encyclopédie, 1755
"As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes."
Lewis Wolpert (1929–2021), from Web of Stories (Lewis Wolpert - Scientist)
"Reading rots the mind."

Monday, November 22, 2021

Fluid intelligence and the locus coeruleus-norepinephrine system

Tsukahara and Engle suggest that the cognitive mechanisms of fluid intelligence map onto the locus coeruleus–norepinephrine system. I pass on their introductory paragraph (the link takes you to their abstract, which I think is less informative):
In this article, we outline what we see as a potentially important relationship for understanding the biological basis of intelligence: that is, the relationship between fluid intelligence and the locus coeruleus–norepinephrine system. This is largely motivated by our findings that baseline pupil size is related to fluid intelligence; the larger the pupils, the higher the fluid intelligence. The connection to the locus coeruleus is based on research showing that the size of the pupil can be used as an indicator of locus coeruleus activity. A large body of research on the locus coeruleus–norepinephrine system in animal and human studies has shown how this system is critical for an impressively wide range of behaviors and cognitive processes, from regulating sleep/wake cycles, to sensation and perception, attention, learning and memory, decision making, and more. The locus coeruleus–norepinephrine system achieves this primarily through its widespread projection system throughout the cortex, strong connections with the prefrontal cortex, and the effect of norepinephrine at many levels of brain function. Given the broad role of this system in behavior, cognition, and brain function, we propose that the locus coeruleus–norepinephrine system is essential for understanding the biological basis of intelligence.
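The headline pupil finding is, at bottom, a simple correlation across participants. For readers who like to see the logic in code, here is a minimal Python sketch using made-up numbers (not the authors' data or analysis pipeline):

```python
# A minimal sketch (made-up numbers, not the authors' data or pipeline) of the
# kind of across-participant correlation Tsukahara and Engle report between
# baseline pupil size and fluid intelligence.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # hypothetical number of participants
pupil_mm = rng.normal(4.0, 0.6, n)         # baseline pupil diameter (mm)
# simulate fluid-intelligence scores that covary positively with pupil size
gf_score = 0.5 * (pupil_mm - pupil_mm.mean()) + rng.normal(0.0, 0.8, n)

r = np.corrcoef(pupil_mm, gf_score)[0, 1]  # Pearson correlation
print(f"baseline pupil size vs. fluid intelligence: r = {r:.2f}")
```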

Friday, November 19, 2021

Drifting nerve assemblies can maintain persistent memories

A prevailing model has been that a memory in our brains is stored in a specific set of nerve connections that, like a book in a library, stays where it belongs. Over the past few years, however, it has become more and more clear that 'representational plasticity' may be the norm. A recent article by Kossio et al. proposes a contrasting memory model (motivated readers can obtain the whole article from me):
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
Here is an explanatory graphic from the article:
Assembly drift and persistent memory. (A) At two nearby times a similar ensemble of neurons forms the neural representation of, for example, “apple” (compare the blue-colored assembly neurons at the first and the second time point). At distant times the representation consists of completely different ensembles (blue-colored assembly neurons at the first and the third time point). Due to their gradual change, temporally distant representations are indirectly related via ensembles in the time period between them. (B) Parts of a thread possess the same form of indirect relation: Nearby parts are composed of similar ensembles of fibers, while distant ones consist of different ensembles, which are connected by those in between. (C) The complete change of memory representations still allows for stable behavior. In the schematic, a tasty apple is perceived. At different times, this triggers different ensembles that presently form the representation of “apple”; see A. Assembly activation initiates a reaching movement toward the apple, despite the dissimilarity of the activated neuron ensembles. Memory and behavior are conserved because the gradual change of assembly neurons enables the inputs (green) and outputs (orange) to track the neural representation.
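For readers who want the drift idea in more concrete form, here is a toy Python sketch (my own illustration, not the Kossio et al. network model) of an assembly that keeps its size while gradually exchanging members, so that overlap with the original ensemble stays high at nearby times and falls toward chance at distant times:

```python
# A toy illustration (not the authors' model) of a drifting assembly: the
# assembly keeps its size while gradually exchanging members, so overlap with
# the original ensemble is high at nearby times and falls toward chance at
# distant times, even though "the" assembly always exists.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, assembly_size, n_steps, swaps_per_step = 1000, 100, 200, 2

assembly = set(rng.choice(n_neurons, assembly_size, replace=False))
initial = set(assembly)

for t in range(1, n_steps + 1):
    drop = rng.choice(list(assembly), swaps_per_step, replace=False)
    add = rng.choice(list(set(range(n_neurons)) - assembly), swaps_per_step, replace=False)
    assembly = (assembly - set(drop)) | set(add)
    if t in (1, 10, 50, 200):
        overlap = len(assembly & initial) / assembly_size
        print(f"step {t:3d}: overlap with the initial assembly = {overlap:.2f}")
```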

Wednesday, November 17, 2021

Our brainstems respond to fake therapies and fake side effects.

Here is the abstract from a Journal of Neuroscience paper by Crawford et al. titled "Brainstem mechanisms of pain modulation: a within-subjects 7T fMRI study of Placebo Analgesic and Nocebo Hyperalgesic Responses":
Pain perception can be powerfully influenced by an individual’s expectations and beliefs. Whilst the cortical circuitry responsible for pain modulation has been thoroughly investigated, the brainstem pathways involved in the modulatory phenomena of placebo analgesia and nocebo hyperalgesia remain to be directly addressed. This study employed ultra-high field 7 Tesla functional MRI (fMRI) to accurately resolve differences in brainstem circuitry present during the generation of placebo analgesia and nocebo hyperalgesia in healthy human participants (N = 25; 12 Male). Over two successive days, through blinded application of altered thermal stimuli, participants were deceptively conditioned to believe that two inert creams labelled ‘lidocaine’ (placebo) and ‘capsaicin’ (nocebo) were acting to modulate their pain relative to a third ‘Vaseline’ (control) cream. In a subsequent test phase, fMRI image sets were collected whilst participants were given identical noxious stimuli to all three cream sites. Pain intensity ratings were collected and placebo and nocebo responses determined. Brainstem-specific fMRI analysis revealed altered activity in key pain-modulatory nuclei, including a disparate recruitment of the periaqueductal gray (PAG) – rostral ventromedial medulla (RVM) pathway when both greater placebo and nocebo effects were observed. Additionally, we found that placebo and nocebo responses differentially activated the parabrachial nucleus but overlapped in their engagement of the substantia nigra and locus coeruleus. These data reveal that placebo and nocebo effects are generated through differential engagement of the PAG-RVM pathway, which in concert with other brainstem sites likely influence the experience of pain by modulating activity at the level of the dorsal horn.

Snippets of Bach

To give MindBlog readers a bit of a break from brain and mind posts, I want to point out that the New York Times has a great series of articles that present roughly five minutes of music chosen by artists and composers to make you fall in love with different genres of classical music: piano, opera, cello, Mozart, 21st-century composers, violin, Baroque music, sopranos, Beethoven, flute, string quartets, tenors, Brahms, choral music, percussion, symphonies, Stravinsky, trumpet and Maria Callas. 

 

The most recent installment presents the stirring, consoling music of Johann Sebastian Bach, the grand master of the Western classical tradition.

Monday, November 15, 2021

Coevolution of tool use and language - shared syntactic processes and basal ganglia substrates

Thibault et al. show that tool use and language share syntactic processes. Functional magnetic resonance imaging reveals that tool use and syntax in language elicit similar patterns of brain activation within the basal ganglia. This indicates common neural resources for the two abilities. Indeed, learning transfer occurs across the two domains: Tool-use motor training improves syntactic processing in language and, reciprocally, linguistic training with syntactic structures improves tool use. Here is their entire structured abstract:   

INTRODUCTION

Tool use is a hallmark of human evolution. Beyond its sensorimotor components, the complexity of which has been extensively investigated, tool use affects cognition from a different perspective. Indeed, tool use requires integrating an external object as a body part and embedding its functional structure in the motor program. This adds a hierarchical level into the motor plan of manual actions, subtly modifying the relationship between interdependent subcomponents. Embedded structures also exist in language, and syntax is the cognitive function handling these linguistic hierarchies. One example is center-embedded object-relative clauses: “The poet [that the scientist admires] reads the paper.” Accordingly, researchers have advanced a role for syntax in action and the existence of similarities between the processes underlying tool use and language, so that shared neural resources for a common cognitive function could be at stake.
RATIONALE
We first tested the existence of shared neural substrates for tool use and syntax in language. Second, we tested the prediction that training one ability should affect performance in the other. In a first experiment, we measured participants’ brain activity with functional magnetic resonance imaging during tool use or, as a control, manual actions. In separate runs, the same participants performed a linguistic task on complex syntactic structures. We looked for common activations between tool use and the linguistic task, predicting similar patterns of activity if they rely on common neural resources. In further behavioral experiments, we tested whether motor training with the tool selectively improves syntactic performance in language and if syntactic training in language, in turn, selectively improves motor performance with the tool.
RESULTS
Tool-use planning and complex syntax processing (i.e., object relatives) elicited neural activity anatomically colocalized within the basal ganglia. A control experiment ruled out verbal working memory and manual (i.e., without a tool) control processes as an underlying component of this overlap. Multivariate analyses revealed similar spatial distributions of neural patterns prompted by tool-use planning and object-relative processing. This agrees with the recruitment of the same neural resources by both abilities and with the existence of a supramodal syntactic function. The shared neurofunctional resources were moreover reflected behaviorally by cross-domain learning transfer. Indeed, tool-use training significantly improved linguistic performance with complex syntactic structures. No learning transfer was observed on language syntactic abilities if participants trained without the tool. The reverse was also true: Syntactic training with complex sentences improved motor performance with the tool more than motor performance in a task without the tool and matched for sensorimotor difficulty. No learning transfer was observed on tool use if participants trained with simpler syntactic structures in language.
CONCLUSION
These findings reveal the existence of a supramodal syntactic function that is shared between language and motor processes. As a consequence, training tool-use abilities improves linguistic syntax and, reciprocally, training linguistic syntax abilities improves tool use. The neural mechanisms allowing for boosting performance in one domain by training syntax in the other may involve priming processes through preactivation of common neural resources, as well as short-term plasticity within the shared network. Our findings point to the basal ganglia as the neural site of supramodal syntax that handles embedded structures in either domain and also support longstanding theories of the coevolution of tool use and language in humans.

Friday, November 12, 2021

Freedom From Illusion

A friend who attended the lecture I gave last Sunday (A New Vision of how our Minds Work, mentioned in Monday's post) sent me an article from The Buddhist Review "TRICYCLE" by Pema Düddul titled "Freedom From Illusion". If you scan both texts, I suspect you will find, as I do, a striking consonance between the neuroscientific and Buddhist perspectives on "Illusion." 

From the beginning of the Düddul article:

A shooting star, a clouding of the sight, 
a lamp, an illusion, a drop of dew, a bubble, 
a dream, a lightning’s flash, a thunder cloud: 
this is the way one should see the conditioned.
This revered verse from the Diamond Sutra points to one of Buddhism’s most profound yet confounding truths—the illusory nature of all things. The verse is designed to awaken us to ultimate reality, specifically to the fact that all things, especially thoughts and feelings, are the rainbow-like display of the mind. One of the Tibetan words for the dualistic mind means something like “a magician creating illusions.” As my teacher Ngakpa Karma Lhundup Rinpoche explained: “All of our thoughts are magical illusions created by our mind. We get trapped, carried away by our own illusions. We forget that we are the magician in the first place!”
Compare this with my talk's description of predictive processing, and how what we see, hear, touch, taste, and smell are largely simulations or illusions about the world. Here is a summary sentence in one of my slides, taken from a lecture by Ruben Laukkonen, in which I replace his last word, 'fantasies,' with the word 'illusions.'
Everything we do and experience is in service of reducing surprises by fulfilling illusions.

Wednesday, November 10, 2021

Computational evidence that predictive processing shapes language comprehension mechanisms in the brain.

Having just posted a lecture on predictive processing that I gave two days ago, I come across this fascinating work from Schrimpf et al.:  

Significance

Language is a quintessentially human ability. Research has long probed the functional architecture of language in the mind and brain using diverse neuroimaging, behavioral, and computational modeling approaches. However, adequate neurally-mechanistic accounts of how meaning might be extracted from language are sorely lacking. Here, we report a first step toward addressing this gap by connecting recent artificial neural networks from machine learning to human recordings during language processing. We find that the most powerful models predict neural and behavioral responses across different datasets up to noise levels. Models that perform better at predicting the next word in a sequence also better predict brain measurements—providing computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the brain.
Abstract
The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species’ signature cognitive skill. We find that the most powerful “transformer” models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models’ neural fits (“brain score”) and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
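For the computationally inclined, the "brain score" idea boils down to regressing model activations onto measured neural responses and scoring predictions on held-out sentences. Here is a minimal Python sketch with simulated activations and voxel responses (not the authors' models, data, or exact procedure):

```python
# A minimal sketch of the "brain score" logic (simulated activations and voxel
# responses; not the authors' models, data, or exact procedure): fit a ridge
# regression from network activations to neural responses and score how well
# it predicts held-out sentences.
import numpy as np

rng = np.random.default_rng(2)
n_sent, n_units, n_voxels = 200, 50, 10
X = rng.normal(size=(n_sent, n_units))             # model activations per sentence
W_true = rng.normal(size=(n_units, n_voxels))
Y = X @ W_true + rng.normal(scale=5.0, size=(n_sent, n_voxels))  # noisy "neural" responses

train, test = slice(0, 150), slice(150, None)
lam = 10.0                                         # ridge penalty
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_units),
                    X[train].T @ Y[train])         # closed-form ridge solution
pred = X[test] @ W

brain_score = np.mean([np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
                       for v in range(n_voxels)])  # mean held-out correlation
print(f"brain score (mean held-out r): {brain_score:.2f}")
```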

Monday, November 08, 2021

A MindBlog lecture - A New Vision of how our Minds Work

Yesterday I gave a short talk to the Austin Rainbow Forum discussion group that I started up with several other members of the Austin Prime Timers in January 2018. As I promised at the outset of the talk, which was fairly intense, I am now putting a PDF of the lecture text and slides on my website, and here am passing it on to MindBlog readers who might be interested in having a look. Here is the second slide of the talk, listing its topics:

 


It’s Quitting Season

I want to pass on two articles with the shared theme of people taking stock of their lives and deciding to stop making themselves unhappy. The piece by Crouse and Ferguson is a video by, and directed towards, Millennials, with the following introductory text:
It’s been a brutal few years. But we’ve gritted through. We’ve spent time languishing. We’ve had one giant national burnout. And now, finally, we’re quitting...We are quitting our jobs. Our cities. Our marriages. Even our Twitter feeds...And as we argue in the video, we’re not quitting because we’re weak. We’re quitting because we’re smart...younger Americans like 18-year-old singer Olivia Rodrigo and the extraordinary Simone Biles are barely old enough to rent a car but they are already teaching us about boundaries. They’ve seen enough hollowed-out millennials to know what the rest of us are learning: Don’t be a martyr to grit.
I feel some personal resonance with points made about a whole career path in the piece by Arthur Brooks, To Be Happy, Hide From the Spotlight, because this clip nails a part of the reason I keep driving myself to performances (writing, lecturing, music) by rote habit:
Assuming that you aren’t a pop star or the president, fame might seem like an abstract problem. The thing is, fame is relative, and its cousin, prestige — fame among a particular group of people — is just as fervently chased in smaller communities and fields of expertise. In my own community of academia, honors and prestige can be highly esoteric but deeply desired.
I suggest you read the whole article, but here are a few further clips:
Even if a person’s motive for fame is to set a positive example, it mirrors the other, less flattering motives insofar as it depends on other people’s opinions. And therein lies the happiness problem. Thomas Aquinas wrote in the 13th century, “Happiness is in the happy. But honor is not in the honored.” ...research shows that fame ...based on what scholars call extrinsic rewards... brings less happiness than intrinsic rewards...fame has become a form of addiction. This is especially true in the era of social media, which allows almost anyone with enough motivation to achieve recognition by some number of strangers...this is not a new phenomenon. The 19th-century philosopher Arthur Schopenhauer said fame is like seawater: “The more we have, the thirstier we become.”
No social scientists I am aware of have created a quantitative misery index of fame. But the weight of the indirect evidence above, along with the testimonies of those who have tasted true fame in their time, should be enough to show us that it is poisonous. It is “like a river, that beareth up things light and swollen,” said Francis Bacon, “and drowns things weighty and solid.” Or take it from Lady Gaga: “Fame is prison.”
...Pay attention to when you are seeking fame, prestige, envy, or admiration—especially from strangers. Before you post on social media, for example, ask yourself what you hope to achieve with it...Say you want to share a bit of professional puffery or photos of your excellent beach body. The benefit you experience is probably the little hit of dopamine you will get as you fire it off while imagining the admiration or envy others experience as they see it. The cost is in the reality of how people will actually see your post (and you): Research shows that people will largely find your boasting to be annoying—even if you disguise it with a humblebrag—and thus admire you less, not more. As Shakespeare helpfully put it, “Who knows himself a braggart, / Let him fear this, for it will come to pass / that every braggart shall be found an ass.”
The poet Emily Dickinson called fame a “fickle food / Upon a shifting plate.” But far from a harmless meal, “Men eat of it and die.” It’s a good metaphor, because we have the urge to consume all kinds of things that appeal to some anachronistic neurochemical impulse but that nevertheless will harm us. In many cases—tobacco, drugs of abuse, and, to some extent, unhealthy foods—we as a society have recognized these tendencies and taken steps to combat them by educating others about their ill effects.
Why have we failed to do so with fame? None of us, nor our children, will ever find fulfillment through the judgment of strangers. The right rule of thumb is to treat fame like a dangerous drug: Never seek it for its own sake, teach your kids to avoid it, and shun those who offer it.

Friday, November 05, 2021

Variability, not stereotypical expressions, in facial portraying of emotional states.

Barrett and collaborators use a novel method to offer more evidence against a reliable mapping between certain emotional states and facial muscle movements:
It is long hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually-rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and actors’ facial poses were coded for their specific movements. Both unsupervised and supervised machine learning find that in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories; they were rated when presented alone and when presented with their associated scenarios, revealing that emotion inferences by participants also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and studying a broader range of cultural contexts.
This perspective is the opposite of that expressed by Cowen, Keltner, et al., who use another novel method to reach contrary conclusions, in work noted in MindBlog's 12/29/20 post along with some reservations about their conclusions.
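For readers curious how "reliability and specificity" can be quantified, here is a toy Python sketch using hypothetical facial-action codes (not the Barrett et al. data or their machine-learning pipeline): if a category's poses were reliable and specific, coded facial configurations within that category would be far more similar to one another than to poses from other categories.

```python
# A toy sketch of the reliability/specificity logic (hypothetical facial-action
# codes, not the authors' data or machine-learning pipeline): compare the
# similarity of coded facial configurations within versus between categories.
import numpy as np

rng = np.random.default_rng(3)
n_categories, poses_per_cat, n_action_units = 13, 20, 30
labels = np.repeat(np.arange(n_categories), poses_per_cat)

# simulate highly variable poses: weak category signal plus strong pose-to-pose noise
prototypes = rng.normal(size=(n_categories, n_action_units))
codes = 0.3 * prototypes[labels] + rng.normal(size=(len(labels), n_action_units))

codes /= np.linalg.norm(codes, axis=1, keepdims=True)   # for cosine similarity
sim = codes @ codes.T
same_cat = labels[:, None] == labels[None, :]
diag = np.eye(len(labels), dtype=bool)

within = sim[same_cat & ~diag].mean()
between = sim[~same_cat].mean()
print(f"within-category similarity {within:.2f} vs. between-category {between:.2f}")
```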

Wednesday, November 03, 2021

People mistake the internet’s knowledge for their own

Fascinating experiments from Adrian Ward:   

Significance

In the current digital age, people are constantly connected to online information. The present research provides evidence that on-demand access to external information, enabled by the internet and search engines like Google, blurs the boundaries between internal and external knowledge, causing people to believe they could—or did—remember what they actually just found. Using Google to answer general knowledge questions artificially inflates people’s confidence in their own ability to remember and process information and leads to erroneously optimistic predictions regarding how much they will know without the internet. When information is at our fingertips, we may mistakenly believe that it originated from inside our heads.
Abstract
People frequently search the internet for information. Eight experiments (n = 1,917) provide evidence that when people “Google” for online information, they fail to accurately distinguish between knowledge stored internally—in their own memories—and knowledge stored externally—on the internet. Relative to those using only their own knowledge, people who use Google to answer general knowledge questions are not only more confident in their ability to access external information; they are also more confident in their own ability to think and remember. Moreover, those who use Google predict that they will know more in the future without the help of the internet, an erroneous belief that both indicates misattribution of prior knowledge and highlights a practically important consequence of this misattribution: overconfidence when the internet is no longer available. Although humans have long relied on external knowledge, the misattribution of online knowledge to the self may be facilitated by the swift and seamless interface between internal thought and external information that characterizes online search. Online search is often faster than internal memory search, preventing people from fully recognizing the limitations of their own knowledge. The internet delivers information seamlessly, dovetailing with internal cognitive processes and offering minimal physical cues that might draw attention to its contributions. As a result, people may lose sight of where their own knowledge ends and where the internet’s knowledge begins. Thinking with Google may cause people to mistake the internet’s knowledge for their own.

Monday, November 01, 2021

What the mind is - similarities and differences in concepts of mental life in five cultures

From Weisman et al., who do a fascinating study of cognitive structures 'from the bottom up', allowing data to give rise to ontological structures rather than working 'from the top down' by using a theory to guide hypothesis-driven data collection:
How do concepts of mental life vary across cultures? By asking simple questions about humans, animals and other entities – for example, ‘Do beetles get hungry? Remember things? Feel love?’ – we reconstructed concepts of mental life from the bottom up among adults (N = 711) and children (ages 6–12 years, N = 693) in the USA, Ghana, Thailand, China and Vanuatu. This revealed a cross-cultural and developmental continuity: in all sites, among both adults and children, cognitive abilities travelled separately from bodily sensations, suggesting that a mind–body distinction is common across diverse cultures and present by middle childhood. Yet there were substantial cultural and developmental differences in the status of social–emotional abilities – as part of the body, part of the mind or a third category unto themselves. Such differences may have far-reaching social consequences, whereas the similarities identify aspects of human understanding that may be universal.
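For readers who want a concrete picture of the bottom-up approach, here is a toy Python sketch (hypothetical ratings, not the Weisman et al. dataset or their exact analysis) in which a matrix of capacity ratings is decomposed so that capacities that travel together, such as bodily sensations versus cognitive abilities, load on separate components:

```python
# A toy sketch of the bottom-up logic (hypothetical ratings, not the authors'
# data or method): decompose a ratings matrix with PCA and see which mental
# capacities travel together.
import numpy as np

rng = np.random.default_rng(4)
capacities = ["hunger", "pain", "remembering", "planning", "love", "pride"]
n_entities = 60

# two simulated latent dimensions: bodily sensation and cognition
body = rng.normal(size=n_entities)
mind = rng.normal(size=n_entities)
loadings = np.array([[1.0, 0.1],   # hunger      -> body
                     [0.9, 0.2],   # pain        -> body
                     [0.1, 1.0],   # remembering -> mind
                     [0.2, 0.9],   # planning    -> mind
                     [0.5, 0.5],   # love        -> mixed
                     [0.4, 0.6]])  # pride       -> mixed
ratings = np.column_stack([body, mind]) @ loadings.T + rng.normal(0, 0.3, (n_entities, 6))

X = ratings - ratings.mean(axis=0)             # center, then PCA via SVD
_, _, Vt = np.linalg.svd(X, full_matrices=False)
for name, pc1, pc2 in zip(capacities, Vt[0], Vt[1]):
    print(f"{name:12s} PC1 = {pc1:+.2f}   PC2 = {pc2:+.2f}")
```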

Friday, October 29, 2021

People listening to the same story synchronize their heart rates.

Several studies have shown that people paying attention to the same videos or listening to the same stories show similar brain activity, as measured by electroencephalogram (EEG). Electrocardiogram (EKG) measurements are experimentally much easier to perform. Pérez et al. now show that the heart rates of participants in their study, measured by EKG, tended to speed up or slow down at the same points in the story, demonstrating that conscious processing of narrative stimuli synchronizes heart rate between individuals. Here is their abstract:

Highlights

• Narrative stimuli can synchronize fluctuations of heart rate between individuals 
• This interpersonal synchronization is modulated by attention and predicts memory 
• These effects on heart rate cannot be explained by modulation of respiratory patterns 
• Synchrony is lower in patients with disorders of consciousness
Summary
Heart rate has natural fluctuations that are typically ascribed to autonomic function. Recent evidence suggests that conscious processing can affect the timing of the heartbeat. We hypothesized that heart rate is modulated by conscious processing and therefore dependent on attentional focus. To test this, we leverage the observation that neural processes synchronize between subjects by presenting an identical narrative stimulus. As predicted, we find significant inter-subject correlation of heart rate (ISC-HR) when subjects are presented with an auditory or audiovisual narrative. Consistent with our hypothesis, we find that ISC-HR is reduced when subjects are distracted from the narrative, and higher ISC-HR predicts better recall of the narrative. Finally, patients with disorders of consciousness have lower ISC-HR, as compared to healthy individuals. We conclude that heart rate fluctuations are partially driven by conscious processing, depend on attentional state, and may represent a simple metric to assess conscious state in unresponsive patients.
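The key measure, inter-subject correlation of heart rate (ISC-HR), is simple to compute: correlate each listener's heart-rate time course with the average of everyone else's, then take the mean. Here is a minimal Python sketch with simulated heart-rate traces (not the authors' data or preprocessing):

```python
# A minimal sketch of inter-subject correlation of heart rate (ISC-HR), using
# simulated heart-rate traces rather than the authors' data or preprocessing.
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_samples = 20, 600                  # e.g., a 10-min story sampled at 1 Hz
shared = rng.normal(size=n_samples)              # hypothetical story-driven fluctuation
hr = 70 + 3 * shared + rng.normal(scale=4, size=(n_subjects, n_samples))

isc = []
for s in range(n_subjects):
    others = np.delete(hr, s, axis=0).mean(axis=0)     # leave-one-out average
    isc.append(np.corrcoef(hr[s], others)[0, 1])
print(f"mean ISC-HR: {np.mean(isc):.2f}")
```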

Wednesday, October 27, 2021

What are our brains doing when they are (supposedly) doing nothing?

Pezzulo et al. address the question of this post's title in an article in Trends in Cognitive Sciences: "The secret life of predictive brains: what's spontaneous activity for?" (motivated readers can obtain a PDF of the article by emailing me). They suggest an explanation for why brains are constantly active, displaying sophisticated dynamical patterns of spontaneous activity, even when not engaging in tasks or receiving external sensory stimuli. I pass on the article highlights and summary: 
• Spontaneous brain dynamics are manifestations of top-down dynamics of generative models detached from action–perception cycles. 
• Generative models constantly produce top-down dynamics, but we call them expectations and attention during task engagement and spontaneous activity at rest. 
• Spontaneous brain dynamics during resting periods optimize generative models for future interactions by maximizing the entropy of explanations in the absence of specific data and reducing model complexity. 
• Low-frequency brain fluctuations during spontaneous activity reflect transitions between generic priors consisting of low-dimensional representations and connectivity patterns of the most frequent behavioral states. 
• High-frequency fluctuations during spontaneous activity in the hippocampus and other regions may support generative replay and model learning.
Brains at rest generate dynamical activity that is highly structured in space and time. We suggest that spontaneous activity, as in rest or dreaming, underlies top-down dynamics of generative models. During active tasks, generative models provide top-down predictive signals for perception, cognition, and action. When the brain is at rest and stimuli are weak or absent, top-down dynamics optimize the generative models for future interactions by maximizing the entropy of explanations and minimizing model complexity. Spontaneous fluctuations of correlated activity within and across brain regions may reflect transitions between ‘generic priors’ of the generative model: low dimensional latent variables and connectivity patterns of the most common perceptual, motor, cognitive, and interoceptive states. Even at rest, brains are proactive and predictive.

Monday, October 25, 2021

Scientific fields don't advance if too many papers are published.

Fascinating work from Chu and Evans:

Significance

The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas.
Abstract
In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
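The dynamic they describe is essentially preferential attachment: new papers tend to cite already well-cited work. Here is a toy Python simulation (illustrative parameters, not fit to their 1.8 billion citations) showing how such a process concentrates citations on a small canon:

```python
# A toy preferential-attachment simulation of the dynamic Chu and Evans
# describe (illustrative parameters, not their data): when each new paper
# tends to cite already well-cited work, citations pile up on a small canon.
import numpy as np

rng = np.random.default_rng(6)
n_papers, cites_per_paper = 5000, 10
citations = np.ones(n_papers)                 # start each paper with weight 1

for new in range(100, n_papers):              # papers arrive one at a time
    probs = citations[:new] / citations[:new].sum()
    cited = rng.choice(new, size=cites_per_paper, p=probs)   # rich get richer
    np.add.at(citations, cited, 1)

top_share = np.sort(citations)[-n_papers // 100:].sum() / citations.sum()
print(f"share of all citations held by the top 1% of papers: {top_share:.0%}")
```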

Friday, October 22, 2021

Metabolism modulates network synchrony in the aging brain

Wow, this work from Weistuch et al. tempts me to reconsider my decision to stay away from the various mitochondrial-metabolism-stimulating supplements I have experimented with over the past 10-15 years (they made me a bit hyper). It has been hypothesized that declining glucose metabolism in older brains drives the loss of high-cost (integrated) functional activities (activities of the sort I'm trying to carry out at the moment in cobbling together a coherent lecture from diverse sources). From the paper's introduction:
We draw on two types of experimental evidence. First, as established using positron emission tomography, older brains show reduced glucose metabolism. Second, as established by functional MRI (fMRI), aging is associated with weakened functional connectivity (FC; i.e., reduced communication [on average] between brain regions). Combining both observations suggests that impaired glucose metabolism may underlie changes in FC. Supporting this link are studies showing disruptions similar to those seen with aging in type 2 diabetic subjects.

The Significance Statement and Abstract:  

Significance

How do brains adapt to changing resource constraints? This is particularly relevant in the aging brain, for which the ability of neurons to utilize their primary energy source, glucose, is diminished. Through experiments and modeling, we find that changes to brain activity patterns with age can be understood in terms of decreasing metabolic activity. Specifically, we find that older brains approach a critical point in our model, enabling small changes in metabolic activity to give rise to an abrupt reconfiguration of functional brain networks.
Abstract
Brain aging is associated with hypometabolism and global changes in functional connectivity. Using functional MRI (fMRI), we show that network synchrony, a collective property of brain activity, decreases with age. Applying quantitative methods from statistical physics, we provide a generative (Ising) model for these changes as a function of the average communication strength between brain regions. We find that older brains are closer to a critical point of this communication strength, in which even small changes in metabolism lead to abrupt changes in network synchrony. Finally, by experimentally modulating metabolic activity in younger adults, we show how metabolism alone—independent of other changes associated with aging—can provide a plausible candidate mechanism for marked reorganization of brain network topology.
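For readers who want the flavor of the model, here is a minimal, generic Ising-style sketch in Python (my own toy version, not the authors' fitted model): binary "regions" coupled with an average communication strength J show synchrony that changes sharply as J crosses a critical value (near J = 1 for this fully connected normalization).

```python
# A generic fully connected Ising toy model (not the authors' fitted model):
# network synchrony, measured as mean |magnetization|, rises sharply as the
# average coupling J crosses its critical value.
import numpy as np

rng = np.random.default_rng(7)
n = 64                                        # number of "brain regions"

def synchrony(J, sweeps=500):
    """Mean |magnetization| of a fully connected Ising network at coupling J."""
    s = rng.choice([-1, 1], n)
    mags = []
    for sweep in range(sweeps):
        for _ in range(n):                    # Metropolis updates
            i = rng.integers(n)
            dE = 2 * (J / n) * s[i] * (s.sum() - s[i])
            if dE <= 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
        if sweep > sweeps // 2:               # discard burn-in
            mags.append(abs(s.mean()))
    return np.mean(mags)

for J in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"coupling J = {J:.1f}: network synchrony = {synchrony(J):.2f}")
```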

Wednesday, October 20, 2021

A debate over stewardship of global collective behavior

In this post I'm going to pass on the abstract of a PNAS perspective piece by Bak-Coleman et al., a critique by Cheong and Jones, and a reply to the critique by Bak-Coleman and Bergstrom. First, the Bak-Coleman et al. abstract:
Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.
The critique by Cheong and Jones:
In vivid detail, Bak-Coleman et al. describe explosively multiplicative global pathologies of scale posing existential risk to humanity. They argue that the study of collective behavior in the age of digital social media must rise to a “crisis discipline” dedicated to averting global ruin through the adaptive manipulation of social dynamics and the emergent phenomenon of collective behavior. Their proposed remedy is a massive global, multidisciplinary coalition of scientific experts to discover how the “dispersed networks” of digital media can be expertly manipulated through “urgent, evidence-based research” to “steward” social dynamics into “rapid and effective collective behavioral responses,” analogous to “providing regulators with information” to guide the stewardship of ecosystems. They picture the enlightened harnessing of yet-to-be-discovered scale-dependent rules of internet-age social dynamics as a route to fostering the emergent phenomenon of adaptive swarm intelligence.
We wish to issue an urgent warning of our own: Responding to the self-evident fulminant, rampaging pathologies of scale ravaging the planet with yet another pathology of scale will, at best, be ineffective and, at worst, counterproductive. It is the same thing that got us here. The complex international coalition they propose would be like forming a new, ultramodern weather bureau to furnish consensus recommendations to policy makers while a megahurricane is already making landfall. This conjures images of foot dragging, floor fights, and consensus building while looking for actionable “mechanistic insight” into social dynamics on the deck of the Titanic. After lucidly spotlighting the urgent scale-dependent mechanistic nature of the crisis, Bak-Coleman et al. do not propose any immediate measures to reduce scale, but rather offer that there “is reason to be hopeful that well-designed systems can promote healthy collective action at scale...” Hope is neither a strategy nor an action.
Despite lofty goals, the coalition they propose does not match the urgency or promise a rapid and collective behavioral response to the existential threats they identify. Scale reduction may be “collective,” but achieving it will have to be local, authentic, and without delay—that is, a response conforming to the “all hands on deck” swarm intelligence phenomena that are well described in eusocial species already. When faced with the potential for imminent global ruin lurking ominously in the fat tail (5) of the future distribution, the precautionary principle dictates that we should respond with now-or-never urgency. This is a simple fact. A “weather bureau” for social dynamics would certainly be a valuable, if not indispensable, institution for future generations. But there is no reason that scientists around the world, acting as individuals within their own existing social networks and spheres of influence, observing what is already obvious with their own eyes, cannot immediately create a collective chorus to send this message through every digital channel instead of waiting for a green light from above. “Urgency” is euphemistic. It is now or never.
The Bak-Coleman and Bergstrom reply to the critique:
In our PNAS article “Stewardship of global collective behavior”, we describe the breakneck pace of recent innovations in information technology. This radical transformation has transpired not through a stewarded effort to improve information quality or to further human well-being. Rather, current technologies have been developed and deployed largely for the orthogonal purpose of keeping people engaged online. We cannot expect that an information ecology organized around ad sales will promote sustainability, equity, or global health. In the face of such impediments to rational democratic action, how can we hope to overcome threats such as global warming, habitat destruction, mass extinction, war, food security, and pandemic disease? We call for a concerted transdisciplinary response, analogous to other crisis disciplines such as conservation ecology and climate science.
In their letter, Cheong and Jones share our vision of the problem—but they express frustration at the absence of an immediately actionable solution to the enormity of challenges that we describe. They assert “swarm intelligence begins now or never” and advocate local, authentic, and immediate “scale reduction.” It’s an appealing thought: Let us counter pathologies of scale by somehow reversing course.
But it’s not clear what this would entail by way of practical, safe, ethical, and effective intervention. Have there ever been successful, voluntary, large-scale reductions in the scale of any aspect of human social life?
Nor is there reason to believe that an arbitrary, hasty, and heuristically decided large-scale restructuring of our social networks would reduce the long tail of existential risk. Rather, rapid shocks to complex systems are a canonical source of cascading failure. Moving fast and breaking things got us here. We can’t expect it to get us out.
Nor do we share the authors’ optimism about what scientists can accomplish with “a collective chorus … through every digital channel”. It is difficult to envision a louder, more vehement, and more cohesive scientific response than that to the COVID-19 pandemic. Yet this unified call for basic public health measures—grounded in centuries of scientific knowledge—nonetheless failed to mobilize political leadership and popular opinion.
Our views do align when it comes to the “now-or-never urgency” that Cheong and Jones highlight. Indeed, this is a key feature of a crisis discipline: We must act without delay to steer a complex system—while still lacking a complete understanding of how that system operates.
As scholars, our job is to call attention to underappreciated threats and to provide the knowledge base for informed decision-making. Academics do not—and should not—engage in large-scale social engineering. Our grounded view of what science can and should do in a crisis must not be mistaken for lassitude or unconcern. Worldwide, the unprecedented restructuring of human communication is having an enormous impact on issues of social choice, often to our detriment. Our paper is intended to raise the alarm. Providing the definitive solution will be a task for a much broader community of scientists, policy makers, technologists, ethicists, and other voices from around the globe.

Monday, October 18, 2021

Paws for thought: Dogs have a theory of mind?

In a very simple experiment, Schünemann et al. appear to demonstrate that dogs can attribute thoughts and motivations to humans, distinguishing intentional from unintentional actions. From The Guardian summary of the work:
...A researcher was asked...to pass treats to a dog through a gap in a screen. During the process the researcher tested the dog on three conditions: in one they attempted to offer a treat but “accidentally” dropped it on their side of the screen and said “oops!”, in another, they tried to offer a treat but the gap was blocked. In a third, the researcher offered the treat, but then suddenly withdrew it and said: “Ha ha!”...in all three situations they don’t get the food for some reason...The results, based on analysis of video recordings of 51 dogs, reveal that the dogs waited longer before walking around the screen to get the treat directly in the case of the sudden withdrawal of the morsel than for the other two situations. They were also more likely to stop wagging their tail and sit or lie down...the dogs clearly show different behaviour between the different conditions, suggesting that they distinguish intentional actions from unintentional behavior.
There is debate over whether this behavior - distinguishing human behaviors based on their intentions rather than some other cue - meets the level of understanding that qualifies as having a 'theory of mind.'

Friday, October 15, 2021

The dark side of Eureka: Artificially induced Aha moments make facts feel true

Fascinating observations from Laukkonen et al.:
Some ideas that we have feel mundane, but others are imbued with a sense of profundity. We propose that Aha! moments make an idea feel more true or valuable in order to aid quick and efficient decision-making, akin to a heuristic. To demonstrate where the heuristic may incur errors, we hypothesized that facts would appear more true if they were artificially accompanied by an Aha! moment elicited using an anagram task. In a preregistered experiment, we found that participants (n = 300) provided higher truth ratings for statements accompanied by solved anagrams even if the facts were false, and the effect was particularly pronounced when participants reported an Aha! experience (d = .629). Recent work suggests that feelings of insight usually accompany correct ideas. However, here we show that feelings of insight can be overgeneralized and bias how true an idea or fact appears, simply if it occurs in the temporal ‘neighbourhood’ of an Aha! moment. We raise the possibility that feelings of insight, epiphanies, and Aha! moments have a dark side, and discuss some circumstances where they may even inspire false beliefs and delusions, with potential clinical importance.
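The reported effect size is a Cohen's d, which is straightforward to compute. Here is a small Python sketch with hypothetical truth ratings chosen to land near the reported ballpark (not the authors' data):

```python
# A small sketch of the effect-size calculation, using hypothetical truth
# ratings (not the authors' data): compare ratings for statements accompanied
# by an Aha! experience versus not, and compute Cohen's d.
import numpy as np

rng = np.random.default_rng(8)
aha = rng.normal(6.3, 1.0, 150)        # hypothetical truth ratings with an Aha!
no_aha = rng.normal(5.7, 1.0, 150)     # hypothetical truth ratings without

pooled_sd = np.sqrt((aha.var(ddof=1) + no_aha.var(ddof=1)) / 2)
d = (aha.mean() - no_aha.mean()) / pooled_sd   # Cohen's d
print(f"Cohen's d = {d:.2f}")
```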