Friday, October 29, 2021

People listening to the same story synchronize their heart rates.

Several studies have shown that people paying attention to the same videos or listening to the same stories show similar brain activity, as measured by electroencephalogram (EEG). Electrocardiogram (EKG) measurements are experimentally much easier to perform. Pérez et al. now show that the heart rates of participants in their study, measured by EKG, tended to speed up or slow down at the same points in a story, demonstrating that conscious processing of narrative stimuli synchronizes heart rate between individuals. Here are the paper's highlights and summary:  

Highlights

• Narrative stimuli can synchronize fluctuations of heart rate between individuals 
• This interpersonal synchronization is modulated by attention and predicts memory 
• These effects on heart rate cannot be explained by modulation of respiratory patterns 
• Synchrony is lower in patients with disorders of consciousness
Summary
Heart rate has natural fluctuations that are typically ascribed to autonomic function. Recent evidence suggests that conscious processing can affect the timing of the heartbeat. We hypothesized that heart rate is modulated by conscious processing and therefore dependent on attentional focus. To test this, we leverage the observation that neural processes synchronize between subjects by presenting an identical narrative stimulus. As predicted, we find significant inter-subject correlation of heart rate (ISC-HR) when subjects are presented with an auditory or audiovisual narrative. Consistent with our hypothesis, we find that ISC-HR is reduced when subjects are distracted from the narrative, and higher ISC-HR predicts better recall of the narrative. Finally, patients with disorders of consciousness have lower ISC-HR, as compared to healthy individuals. We conclude that heart rate fluctuations are partially driven by conscious processing, depend on attentional state, and may represent a simple metric to assess conscious state in unresponsive patients.
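
For readers curious about the machinery, the ISC-HR metric boils down to correlating heart-rate time series across subjects. A minimal sketch of one common convention in inter-subject correlation work, correlating each subject against the leave-one-out average of the rest (my illustration, not necessarily the authors' exact pipeline):

```python
import numpy as np

def isc_hr(hr: np.ndarray) -> np.ndarray:
    """Inter-subject correlation of heart rate (a common ISC convention,
    assumed here; not necessarily the paper's exact pipeline).

    hr: shape (n_subjects, n_timepoints), each row one subject's
        heart-rate series resampled to a common time grid.
    Returns one ISC value per subject: the Pearson correlation between
    that subject's series and the average of everyone else's.
    """
    n = hr.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(hr, i, axis=0).mean(axis=0)  # leave-one-out average
        isc[i] = np.corrcoef(hr[i], others)[0, 1]
    return isc

# Toy usage: 10 subjects, 300 time points, with a weak shared fluctuation.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
hr = 70 + 0.5 * shared + rng.standard_normal((10, 300))
print(isc_hr(hr).mean())  # positive when fluctuations are shared across subjects
```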

Wednesday, October 27, 2021

What are our brains doing when they are (supposedly) doing nothing?

Pezzulo et al. address the question of this post's title in an article in Trends in Cognitive Sciences: "The secret life of predictive brains: what's spontaneous activity for?" (motivated readers can obtain a PDF of the article by emailing me). They suggest an explanation for why brains are constantly active, displaying sophisticated dynamical patterns of spontaneous activity, even when not engaging in tasks or receiving external sensory stimuli. I pass on the article's highlights and summary: 
• Spontaneous brain dynamics are manifestations of top-down dynamics of generative models detached from action–perception cycles. 
• Generative models constantly produce top-down dynamics, but we call them expectations and attention during task engagement and spontaneous activity at rest. 
• Spontaneous brain dynamics during resting periods optimize generative models for future interactions by maximizing the entropy of explanations in the absence of specific data and reducing model complexity. 
• Low-frequency brain fluctuations during spontaneous activity reflect transitions between generic priors consisting of low-dimensional representations and connectivity patterns of the most frequent behavioral states. 
• High-frequency fluctuations during spontaneous activity in the hippocampus and other regions may support generative replay and model learning.
Brains at rest generate dynamical activity that is highly structured in space and time. We suggest that spontaneous activity, as in rest or dreaming, underlies top-down dynamics of generative models. During active tasks, generative models provide top-down predictive signals for perception, cognition, and action. When the brain is at rest and stimuli are weak or absent, top-down dynamics optimize the generative models for future interactions by maximizing the entropy of explanations and minimizing model complexity. Spontaneous fluctuations of correlated activity within and across brain regions may reflect transitions between ‘generic priors’ of the generative model: low dimensional latent variables and connectivity patterns of the most common perceptual, motor, cognitive, and interoceptive states. Even at rest, brains are proactive and predictive.

Monday, October 25, 2021

Scientific fields don't advance if too many papers are published.

Fascinating work from Chu and Evans:

Significance

The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas.
Abstract
In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
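
The core mechanism here, citations flowing disproportionately to already well-cited papers, is a classic rich-get-richer dynamic. A toy 'copying' simulation of my own (not Chu and Evans's analysis; all parameters made up) shows how that dynamic concentrates attention on a small canon:

```python
import random

def simulate_citations(n_papers=50000, cites_per_paper=5, copy_prob=0.9, seed=1):
    """Toy cumulative-advantage model: each new paper makes several
    citations; with probability copy_prob it copies the target of a
    randomly chosen existing citation (so well-cited papers attract
    still more citations), otherwise it cites a uniformly random
    earlier paper."""
    random.seed(seed)
    targets = [0]      # flat list of every citation target so far
    counts = [1, 0]    # citations received by papers 0 and 1
    for new_paper in range(2, n_papers):
        counts.append(0)
        for _ in range(cites_per_paper):
            if random.random() < copy_prob:
                t = random.choice(targets)       # rich get richer
            else:
                t = random.randrange(new_paper)  # uniform discovery of older work
            targets.append(t)
            counts[t] += 1
    return counts

counts = simulate_citations()
top10 = sum(sorted(counts, reverse=True)[:10])
print("share of all citations going to the top 10 papers:", top10 / sum(counts))
```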

Friday, October 22, 2021

Metabolism modulates network synchrony in the aging brain

Wow, this work from Weistuch et al. tempts me to reconsider my decision to stay away from the various mitochondrial-metabolism-stimulating supplements I have experimented with over the past 10-15 years (they made me a bit hyper). It has been hypothesized that declining glucose metabolism in older brains drives the loss of high-cost (integrated) functional activities (activities of the sort I'm trying to carry out at the moment in cobbling together a coherent lecture from diverse sources). From the paper's introduction:
We draw on two types of experimental evidence. First, as established using positron emission tomography, older brains show reduced glucose metabolism. Second, as established by functional MRI (fMRI), aging is associated with weakened functional connectivity (FC; i.e., reduced communication [on average] between brain regions). Combining both observations suggests that impaired glucose metabolism may underlie changes in FC. Supporting this link are studies showing disruptions similar to those seen with aging in type 2 diabetic subjects.

The Significance Statement and Abstract:  

Significance

How do brains adapt to changing resource constraints? This is particularly relevant in the aging brain, for which the ability of neurons to utilize their primary energy source, glucose, is diminished. Through experiments and modeling, we find that changes to brain activity patterns with age can be understood in terms of decreasing metabolic activity. Specifically, we find that older brains approach a critical point in our model, enabling small changes in metabolic activity to give rise to an abrupt reconfiguration of functional brain networks.
Abstract
Brain aging is associated with hypometabolism and global changes in functional connectivity. Using functional MRI (fMRI), we show that network synchrony, a collective property of brain activity, decreases with age. Applying quantitative methods from statistical physics, we provide a generative (Ising) model for these changes as a function of the average communication strength between brain regions. We find that older brains are closer to a critical point of this communication strength, in which even small changes in metabolism lead to abrupt changes in network synchrony. Finally, by experimentally modulating metabolic activity in younger adults, we show how metabolism alone—independent of other changes associated with aging—can provide a plausible candidate mechanism for marked reorganization of brain network topology.
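
For the technically curious, here is a minimal sketch of the kind of Ising dynamics the abstract invokes, assuming a simple mean-field (all-to-all) coupling; the authors fit their model to fMRI data, which this toy does not attempt. Synchrony stays near zero below the critical coupling and rises abruptly around it:

```python
import numpy as np

def ising_synchrony(n=200, J=1.0, steps=60000, seed=0):
    """Mean-field Ising toy model: n 'brain regions' are +/-1 spins,
    each coupled to all others with strength J/n. Returns the average
    absolute magnetization, a simple proxy for network synchrony."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    samples = []
    for t in range(steps):
        i = rng.integers(n)
        # energy change from flipping spin i under mean-field coupling
        dE = 2 * s[i] * (J / n) * (s.sum() - s[i])
        if dE <= 0 or rng.random() < np.exp(-dE):  # Metropolis rule
            s[i] = -s[i]
        if t > steps // 2:  # discard burn-in
            samples.append(abs(s.mean()))
    return float(np.mean(samples))

# Synchrony changes abruptly near the mean-field critical point J = 1:
for J in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(J, round(ising_synchrony(J=J), 3))
```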

Wednesday, October 20, 2021

A debate over stewardship of global collective behavior

In this post I'm going to pass on the abstract of a PNAS perspective piece by Bak-Coleman et al., a critique by Cheong and Jones, and a reply to the critique by Bak-Coleman and Bergstrom. First, the Bak-Coleman et al. abstract:
Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.
The critique by Cheong and Jones:
In vivid detail, Bak-Coleman et al. describe explosively multiplicative global pathologies of scale posing existential risk to humanity. They argue that the study of collective behavior in the age of digital social media must rise to a “crisis discipline” dedicated to averting global ruin through the adaptive manipulation of social dynamics and the emergent phenomenon of collective behavior. Their proposed remedy is a massive global, multidisciplinary coalition of scientific experts to discover how the “dispersed networks” of digital media can be expertly manipulated through “urgent, evidence-based research” to “steward” social dynamics into “rapid and effective collective behavioral responses,” analogous to “providing regulators with information” to guide the stewardship of ecosystems. They picture the enlightened harnessing of yet-to-be-discovered scale-dependent rules of internet-age social dynamics as a route to fostering the emergent phenomenon of adaptive swarm intelligence.
We wish to issue an urgent warning of our own: Responding to the self-evident fulminant, rampaging pathologies of scale ravaging the planet with yet another pathology of scale will, at best, be ineffective and, at worst, counterproductive. It is the same thing that got us here. The complex international coalition they propose would be like forming a new, ultramodern weather bureau to furnish consensus recommendations to policy makers while a megahurricane is already making landfall. This conjures images of foot dragging, floor fights, and consensus building while looking for actionable “mechanistic insight” into social dynamics on the deck of the Titanic. After lucidly spotlighting the urgent scale-dependent mechanistic nature of the crisis, Bak-Coleman et al. do not propose any immediate measures to reduce scale, but rather offer that there “is reason to be hopeful that well-designed systems can promote healthy collective action at scale...” Hope is neither a strategy nor an action.
Despite lofty goals, the coalition they propose does not match the urgency or promise a rapid and collective behavioral response to the existential threats they identify. Scale reduction may be “collective,” but achieving it will have to be local, authentic, and without delay—that is, a response conforming to the “all hands on deck” swarm intelligence phenomena that are well described in eusocial species already. When faced with the potential for imminent global ruin lurking ominously in the fat tail (5) of the future distribution, the precautionary principle dictates that we should respond with now-or-never urgency. This is a simple fact. A “weather bureau” for social dynamics would certainly be a valuable, if not indispensable, institution for future generations. But there is no reason that scientists around the world, acting as individuals within their own existing social networks and spheres of influence, observing what is already obvious with their own eyes, cannot immediately create a collective chorus to send this message through every digital channel instead of waiting for a green light from above. “Urgency” is euphemistic. It is now or never.
The Bak-Coleman and Bergstrom reply to the critique:
In our PNAS article “Stewardship of global collective behavior”, we describe the breakneck pace of recent innovations in information technology. This radical transformation has transpired not through a stewarded effort to improve information quality or to further human well-being. Rather, current technologies have been developed and deployed largely for the orthogonal purpose of keeping people engaged online. We cannot expect that an information ecology organized around ad sales will promote sustainability, equity, or global health. In the face of such impediments to rational democratic action, how can we hope to overcome threats such as global warming, habitat destruction, mass extinction, war, food security, and pandemic disease? We call for a concerted transdisciplinary response, analogous to other crisis disciplines such as conservation ecology and climate science.
In their letter, Cheong and Jones share our vision of the problem—but they express frustration at the absence of an immediately actionable solution to the enormity of challenges that we describe. They assert “swarm intelligence begins now or never” and advocate local, authentic, and immediate “scale reduction.” It’s an appealing thought: Let us counter pathologies of scale by somehow reversing course.
But it’s not clear what this would entail by way of practical, safe, ethical, and effective intervention. Have there ever been successful, voluntary, large-scale reductions in the scale of any aspect of human social life?
Nor is there reason to believe that an arbitrary, hasty, and heuristically decided large-scale restructuring of our social networks would reduce the long tail of existential risk. Rather, rapid shocks to complex systems are a canonical source of cascading failure. Moving fast and breaking things got us here. We can’t expect it to get us out.
Nor do we share the authors’ optimism about what scientists can accomplish with “a collective chorus … through every digital channel”. It is difficult to envision a louder, more vehement, and more cohesive scientific response than that to the COVID-19 pandemic. Yet this unified call for basic public health measures—grounded in centuries of scientific knowledge—nonetheless failed to mobilize political leadership and popular opinion.
Our views do align when it comes to the “now-or-never urgency” that Cheong and Jones highlight. Indeed, this is a key feature of a crisis discipline: We must act without delay to steer a complex system—while still lacking a complete understanding of how that system operates.
As scholars, our job is to call attention to underappreciated threats and to provide the knowledge base for informed decision-making. Academics do not—and should not—engage in large-scale social engineering. Our grounded view of what science can and should do in a crisis must not be mistaken for lassitude or unconcern. Worldwide, the unprecedented restructuring of human communication is having an enormous impact on issues of social choice, often to our detriment. Our paper is intended to raise the alarm. Providing the definitive solution will be a task for a much broader community of scientists, policy makers, technologists, ethicists, and other voices from around the globe.

Monday, October 18, 2021

Paws for thought: Dogs have a theory of mind?

In a very simple experiment, Schünemann et al. appear to demonstrate that dogs can attribute thoughts and motivations to humans, distinguishing intentional from unintentional actions. From The Guardian summary of the work:
...A researcher was asked...to pass treats to a dog through a gap in a screen. During the process the researcher tested the dog on three conditions: in one, they attempted to offer a treat but “accidentally” dropped it on their side of the screen and said “oops!”; in another, they tried to offer a treat but the gap was blocked; in a third, the researcher offered the treat but then suddenly withdrew it and said: “Ha ha!”...in all three situations the dog doesn't get the food for some reason...The results, based on analysis of video recordings of 51 dogs, reveal that the dogs waited longer before walking around the screen to get the treat directly in the case of the sudden withdrawal of the morsel than in the other two situations. They were also more likely to stop wagging their tail and to sit or lie down...the dogs clearly show different behaviour between the different conditions, suggesting that they distinguish intentional actions from unintentional behavior.
There is debate over whether this behavior - distinguishing human actions based on their intentions rather than some other cue - meets the level of understanding that qualifies as having a 'theory of mind.'

Friday, October 15, 2021

The dark side of Eureka: Artificially induced Aha moments make facts feel true

Fascinating observations from Laukkonen et al.:
Some ideas that we have feel mundane, but others are imbued with a sense of profundity. We propose that Aha! moments make an idea feel more true or valuable in order to aid quick and efficient decision-making, akin to a heuristic. To demonstrate where the heuristic may incur errors, we hypothesized that facts would appear more true if they were artificially accompanied by an Aha! moment elicited using an anagram task. In a preregistered experiment, we found that participants (n = 300) provided higher truth ratings for statements accompanied by solved anagrams even if the facts were false, and the effect was particularly pronounced when participants reported an Aha! experience (d = .629). Recent work suggests that feelings of insight usually accompany correct ideas. However, here we show that feelings of insight can be overgeneralized and bias how true an idea or fact appears, simply if it occurs in the temporal ‘neighbourhood’ of an Aha! moment. We raise the possibility that feelings of insight, epiphanies, and Aha! moments have a dark side, and discuss some circumstances where they may even inspire false beliefs and delusions, with potential clinical importance.
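
For context on the reported d = .629: Cohen's d expresses the difference between two group means in pooled-standard-deviation units, so this is a medium-sized effect. A small illustration with hypothetical ratings (not the study's data):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical truth ratings: statements paired with a solved anagram
# (an induced Aha! moment) versus statements without one.
rng = np.random.default_rng(42)
with_aha = rng.normal(4.3, 1.0, 150)
without = rng.normal(3.7, 1.0, 150)
print(round(cohens_d(with_aha, without), 2))  # roughly 0.6
```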

Wednesday, October 13, 2021

National religiosity eases the psychological burden of poverty

From Berkessel et al.:  

Significance

According to a fundamental assumption in the social sciences, the burden of lower socioeconomic status (SES) is more severe in developing nations. In contrast to this assumption, recent research has shown that the burden of lower SES is less—not more—severe in developing nations. In three large-scale global data sets, we show that national religiosity can explain this puzzling finding. Developing nations are more religious, and most world religions uphold norms that, in part, function to ease the burden of lower SES and to cast a bad light on higher SES. In times of declining religiosity, this finding is a call to scientists and policymakers to monitor the increasingly harmful effects of lower SES and its far-reaching social consequences.
Abstract
Lower socioeconomic status (SES) harms psychological well-being, an effect responsible for widespread human suffering. This effect has long been assumed to weaken as nations develop economically. Recent evidence, however, has contradicted this fundamental assumption, finding instead that the psychological burden of lower SES is even greater in developed nations than in developing ones. That evidence has elicited consternation because it suggests that economic development is no cure for the psychological burden of lower SES. So, why is that burden greatest in developed nations? Here, we test whether national religiosity can explain this puzzle. National religiosity is particularly low in developed nations. Consequently, developed nations lack religious norms that may ease the burden of lower SES. Drawing on three different data sets of 1,567,204, 1,493,207, and 274,393 people across 156, 85, and 92 nations, we show that low levels of national religiosity can account for the greater burden of lower SES in developed nations. This finding suggests that, as national religiosity continues to decline, lower SES will become increasingly harmful for well-being—a societal change that is socially consequential and demands political attention.

Monday, October 11, 2021

Precision and the Bayesian brain

I've been studying and trying to understand the new prevailing model of how our brains work that is emerging - the brain as a Bayesian predictive processing machine that compares its prior knowledge with incoming evidence of its correctness. If a mismatch occurs that might suggest altering a prior expectation, the precision of the incoming evidence is very important. In a recent issue of Current Biology, Yon and Frith offer a very simple and lucid primer (open access) on what precision is and how it influences adrenergic and dopaminergic neuromodulatory systems to alter the synaptic gain afforded to top-down predictions and bottom-up evidence:
Scientific thinking about the minds of humans and other animals has been transformed by the idea that the brain is Bayesian. A cornerstone of this idea is that agents set the balance between prior knowledge and incoming evidence based on how reliable or ‘precise’ these different sources of information are — lending the most weight to that which is most reliable. This concept of precision has crept into several branches of cognitive science and is a lynchpin of emerging ideas in computational psychiatry — where unusual beliefs or experiences are explained as abnormalities in how the brain estimates precision. But what precisely is precision? In this Primer we explain how precision has found its way into classic and contemporary models of perception, learning, self-awareness, and social interaction. We also chart how ideas around precision are beginning to change in radical ways, meaning we must get more precise about how precision works.
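
The arithmetic behind precision weighting is simplest in the Gaussian case: precision is inverse variance, and the posterior weights the prior and the evidence by their respective precisions. A textbook sketch (my illustration, not the primer's code):

```python
def precision_weighted_update(mu_prior, var_prior, x, var_obs):
    """Bayesian combination of a Gaussian prior and one Gaussian
    observation. Precision = 1/variance; the posterior mean is the
    precision-weighted average of prior mean and observed value."""
    pi_prior, pi_obs = 1.0 / var_prior, 1.0 / var_obs
    mu_post = (pi_prior * mu_prior + pi_obs * x) / (pi_prior + pi_obs)
    var_post = 1.0 / (pi_prior + pi_obs)
    return mu_post, var_post

# Precise evidence (low variance) pulls the posterior toward the data...
print(precision_weighted_update(mu_prior=0.0, var_prior=1.0, x=2.0, var_obs=0.1))
# ...while imprecise evidence leaves the prior nearly untouched.
print(precision_weighted_update(mu_prior=0.0, var_prior=1.0, x=2.0, var_obs=10.0))
```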

Friday, October 08, 2021

Reconsolidation of a reactivated memory can be altered by stress hormone levels.

Stern's summary in Science Magazine of work by Antypa et al.:
Reactivation of a memory can make it malleable to subsequent change during reconsolidation. Targeted pharmacological and behavioral manipulations after memory reactivation can modulate reconsolidation and modify the memory. Antypa et al. investigated whether changes in stress hormone levels during sleep affected later memory of a reactivated episode. The authors recited a story accompanied by a slide show to a group of male and female subjects. If subjects were given treatment to block cortisol synthesis during early morning sleep, then their 3-day-old memory of the story was more precisely recalled than if the early morning cortisol spike was uncontrolled. However, this improvement only occurred if the subjects had been given a visual cue for the story just before anti-cortisol treatment.

Wednesday, October 06, 2021

Perceived voice emotions evolve from categories to dimensions

Whether emotions are best characterized as discrete categories (anger, fear, joy, etc.) or as points on continua of valence (positive/negative) and arousal (calm/agitated) has been debated by emotion researchers for many years. Work from Giordano et al. suggests that both descriptions may be appropriate: categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries, while dimensions impinge predominantly on a later limbic–temporal network (at 240 ms and after 500 ms). Here is the abstract:
Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic–temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions.
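
Representational similarity analysis, the method named in the abstract, amounts to comparing the geometry of two sets of response patterns: build a representational dissimilarity matrix (RDM) for each, then rank-correlate the RDMs. A minimal sketch with toy data (not the study's data or code):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix: patterns has
    shape (n_stimuli, n_features); returns pairwise correlation
    distances between the stimulus patterns."""
    return pdist(patterns, metric="correlation")

def rsa(brain_patterns, model_patterns):
    """Rank-correlate two RDMs, the usual RSA comparison statistic."""
    return spearmanr(rdm(brain_patterns), rdm(model_patterns)).correlation

# Toy usage: 30 voice stimuli; 'model' features might encode category
# labels or valence/arousal coordinates; 'brain' is a noisy linear
# readout of them across 100 measurement channels.
rng = np.random.default_rng(0)
model = rng.standard_normal((30, 4))
brain = model @ rng.standard_normal((4, 100)) + 0.5 * rng.standard_normal((30, 100))
print(rsa(brain, model))  # high when the two geometries match
```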

Monday, October 04, 2021

Music can be infectious like a virus - the same mathematical model works for both

Rosati et al. find that a standard model of epidemic disease, the SIR model, fits trends in song downloads over time:
Popular songs are often said to be ‘contagious’, ‘infectious’ or ‘viral’. We find that download count time series for many popular songs resemble infectious disease epidemic curves. This paper suggests infectious disease transmission models could help clarify mechanisms that contribute to the ‘spread’ of song preferences and how these mechanisms underlie song popularity. We analysed data from MixRadio, comprising song downloads through Nokia cell phones in Great Britain from 2007 to 2014. We compared the ability of the standard susceptible–infectious–recovered (SIR) epidemic model and a phenomenological (spline) model to fit download time series of popular songs. We fitted these same models to simulated epidemic time series generated by the SIR model. Song downloads are captured better by the SIR model, to the same extent that actual SIR simulations are fitted better by the SIR model than by splines. This suggests that the social processes underlying song popularity are similar to those that drive infectious disease transmission. We draw conclusions about song popularity within specific genres based on estimated SIR parameters. In particular, we argue that faster spread of preferences for Electronica songs may reflect stronger connectivity of the ‘susceptible community’, compared with the larger and broader community that listens to more common genres.
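
The SIR model itself is just three coupled differential equations, and under the song-as-infection analogy, daily downloads correspond to the flux of people moving from 'susceptible' to 'infectious'. A minimal sketch of the curve being fit (the MixRadio fitting is not reproduced here; the beta and gamma values are made up for illustration):

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Standard SIR equations, with S, I, R as population fractions:
    S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t = np.linspace(0, 120, 121)               # days
beta, gamma = 0.4, 0.1                     # hypothetical transmission/recovery rates
S, I, R = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma)).T
downloads_per_day = beta * S * I           # new 'infections': rises, peaks, decays
print(f"downloads peak on day {int(t[downloads_per_day.argmax()])}")
```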

Friday, October 01, 2021

Can we get human nature right?

Iris Berent offers an interesting Perspective article in PNAS that considers the strong intuitions that laypeople hold about human nature. People's attitudes bifurcate for "hot emotions" and "cold ideas": emotions, people believe, are innate, whereas ideas must be learned. She suggests that the dissonance between intuitive dualism and essentialism explains why emotions and ideas elicit such conflicting reactions. Here is the article's abstract: 
Few questions in science are as controversial as human nature. At stake is whether our basic concepts and emotions are all learned from experience, or whether some are innate. Here, I demonstrate that reasoning about innateness is biased by the basic workings of the human mind. Psychological science suggests that newborns possess core concepts of “object” and “number.” Laypeople, however, believe that newborns are devoid of such notions but that they can recognize emotions. Moreover, people presume that concepts are learned, whereas emotions (along with sensations and actions) are innate. I trace these beliefs to two tacit psychological principles: intuitive dualism and essentialism. Essentialism guides tacit reasoning about biological inheritance and suggests that innate traits reside in the body; per intuitive dualism, however, the mind seems ethereal, distinct from the body. It thus follows that, in our intuitive psychology, concepts (which people falsely consider as disembodied) must be learned, whereas emotions, sensations, and actions (which are considered embodied) are likely innate; these predictions are in line with the experimental results. These conclusions do not speak to the question of whether concepts and emotions are innate, but they suggest caution in its scientific evaluation.