Wednesday, May 14, 2025
Our human consciousness is a 'Controlled Hallucination' and AI can never achieve it.
I want to suggest that readers have a look at an engaging popular article by Darren Orf that summarizes the ideas of Anil Seth. Seth is a neuroscientist at the University of Sussex whose writing was one of the sources I used in preparing my most recent lecture, "New Perspectives on how our Minds Work." On the 'singularity', the point at which the intelligence of artificial minds might surpass that of human minds, Seth makes the simple point that intelligence is not the same thing as consciousness, which depends on our biological bodies (something AI simply doesn't have) - bodies that use a bunch of controlled hallucinations to run our show.
Monday, March 03, 2025
The unbearable slowness of being: Why do we live at 10 bits/s?
A brief review by Rachel Nuwer in Scientific American points to an article by Zheng and Meister with the title of this post (motivated readers can obtain a PDF of the article from me).
I pass on an initial clip from Nuwer's review and then the abstract of the Zheng and Meister paper:
PEOPLE OFTEN FEEL that their inner thoughts and feelings are much richer than what they are capable of expressing in real time. Entrepreneur Elon Musk is so bothered by what he calls this “bandwidth problem,” in fact, that one of his long-term goals is to create an interface that lets the human brain communicate directly with a computer, unencumbered by the slow speed of speaking or writing.
If Musk succeeded, he would probably be disappointed. According to recent research by Zheng and Meister published in Neuron, human beings remember, make decisions and imagine things at a fixed, excruciatingly slow speed of about 10 bits per second. In contrast, human sensory systems gather data at about one billion bits per second.
This biological paradox, highlighted in the new study, probably contributes to the false feeling that our mind can engage in seemingly infinite thoughts simultaneously—a phenomenon the researchers deem “the Musk illusion.”
The Zheng and Meister abstract:
This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10⁹ bits/s. The stark contrast between these numbers remains unexplained and touches on fundamental aspects of brain function: what neural substrate sets this speed limit on the pace of our existence? Why does the brain need billions of neurons to process 10 bits/s? Why can we only think about one thing at a time? The brain seems to operate in two distinct modes: the “outer” brain handles fast high-dimensional sensory and motor signals, whereas the “inner” brain processes the reduced few bits needed to control behavior. Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.
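The arithmetic behind numbers like these is easy to sketch. Below is a back-of-envelope calculation in Python using my own illustrative figures (typing speed, characters per word, and bits per character are assumptions in the spirit of the paper's examples, not Zheng and Meister's actual derivation); it lands at roughly 10 bits/s and shows the ~10⁸-fold gap with the sensory figure quoted in the abstract.

```python
# Back-of-envelope estimate of behavioral information throughput.
# All numbers below are illustrative assumptions, not values from the paper.
words_per_min = 120      # brisk typing or speaking rate
chars_per_word = 5       # rough English average
bits_per_char = 1.0      # Shannon-style entropy estimate for English text

behavioral_rate = words_per_min * chars_per_word * bits_per_char / 60  # bits per second
sensory_rate = 1e9       # sensory input rate quoted in the abstract

print(f"behavioral throughput ≈ {behavioral_rate:.0f} bits/s")
print(f"sensory input exceeds behavioral output by ≈ {sensory_rate / behavioral_rate:.0e}x")
```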
Friday, February 21, 2025
How complex brains and cognition first arose
I have received a draft of an upcoming paper in Behavioral and Brain Sciences by Coombs and Trestman titled "A Multi-Trait Embodied Framework for the Evolution of Brains and Cognition across Animal Phyla." It has a nice graphic indicating different brain regions whose functionalities are common to humans and phylogenetically different animals with complex brains (crows, octopuses and honeybees). Motivated readers can obtain a PDF of the article from me. Here is the abstract:
Among non-human animals, crows, octopuses and honeybees are well-known for their complex brains and cognitive abilities. Widening the lens from the idiosyncratic abilities of exemplars like these to those of animals across the phylogenetic spectrum begins to reveal the ancient evolutionary process by which complex brains and cognition first arose in different lineages. The distribution of 35 phenotypic traits in 17 metazoan lineages reveals that brain and cognitive complexity in only three lineages (vertebrates, cephalopod mollusks, and euarthropods) can be attributed to the pivotal role played by body, sensory, brain and motor traits in active visual sensing and visuomotor skills. Together, these pivotal traits enabled animals to transition from largely reactive to more proactive behaviors, and from slow and two-dimensional motion to more rapid and complex three-dimensional motion. Among pivotal traits, high-resolution eyes and laminated visual regions of the brain stand out because they increased the processing demands on and the computational power of the brain by several orders of magnitude. The independent acquisition of pivotal traits in cognitively complex (CC) lineages can be explained as the completion of several multi-trait transitions over the course of evolutionary history, each resulting in an increasing level of complexity that arises from a distinct combination of traits. Whereas combined pivotal traits represent the highest level of complexity in CC lineages, combined traits at lower levels characterize many non-CC lineages, suggesting that certain body, sensory and brain traits may have been linked (the trait-linkage hypothesis) during the evolution of both CC and non-CC lineages.
Wednesday, January 29, 2025
Everything we experience comes from inside
I pass on this cryptic paragraph from my personal journal:
Everything we experience is coming from inside us - our illusion of having a self, our sense of agency. This includes attributing causal agency to others, as in 'they are making me do this.' This 'error' allows individuals in a kinship group or tribe who share the same error to form an imaginary 'we' hive mind supporting unified action. The higher metacognitive stance is to participate when necessary in the group illusion while maintaining clarity on the fact that the actual agency is within oneself.
Wednesday, January 08, 2025
Predictive coding for social perception
Perception cannot rely solely on bottom-up processes, whereby patterns of receptor stimulation are passed up the hierarchy to generate a corresponding awareness. Such bottom-up processes would always generate experiences that are out-of-date and saturated by noise. Predictive processes are thought to play a key role in overcoming these problems, allowing us to generate best guesses concerning the likely sensorium, and highlighting quickly when the world is not as we expect. Action provides a crucial predictive source and a mechanism for us to resolve uncertainty and surprise, but further complicates our understanding due to further predictive cues and continuous change of sensory input. Another agent who can also change the world and whom we seek to understand adds yet another layer of complexity. How can we understand the predictive mechanisms supporting social interaction and understanding, with such a multitude of moving and interacting components? In this special issue, Keysers et al. (2024) outline how predictive coding can be applied to understanding the actions and emotions of others, with Mayo and Shamay-Tsoory (2024) discussing how these mutual predictions might shape social learning. They suggest that such social learning might be supported by interbrain synchronization, and Antonelli et al. (2024) discuss the critical role of emotion in shaping these multibrain dynamics.
While it is clearly crucial that we understand the nature of the mechanisms underlying social interactions, we wish to highlight the challenges of this complexity for scientific progress. In particular, how do we find ways to properly test, refute, and improve our models when the assumed supporting mechanisms are so complex?
How predictions shape neural processing is thought to differ across space and time, even for processing of the simplest (non-social; static) elements of our environment. Keysers et al. (2024) highlight the assumed neural interactions across cortical layers, such that predictions are passed down the hierarchy to hypothesis units in deep (and perhaps superficial) cortical layers, input arrives in middle layers, and error signals are calculated and represented in superficial layers. This idea is supported by recent 7 T MRI work from our lab demonstrating increased decoding of predicted Gabor orientations in deep layers of primary visual cortex, with an advantage for unpredicted orientations in superficial layers (Thomas et al., 2024). Recent evidence suggests opposing influences at the temporal level as well (McDermott et al., 2024). This electroencephalography (EEG) study found that early perceptual processing is biased towards what we expect (< 200 ms; optimizing veridicality), with the advantage flipping in later time ranges (> 200 ms; optimizing informativeness – in line with the opposing process account proposed in Press et al., 2020). Building testable mechanistic accounts of these interactions across time and space – even for the simple perception of deterministic sequences of Gabor patches – represents a continued puzzle for future work.
In the social domain, the stimuli are by their nature highly complex and dynamic (Keysers et al., 2024). Therefore, these interactions across space and time must be continuously updated. Despite this complexity, there is some evidence cited by Keysers et al. (2024) in line with the above laminar conclusions in simpler environments.
Specifically, there is increased deep-layer information about observed actions in parietal cortex when presented in a predictable order, mediated via feedback connections (from premotor cortex). Social domains also yield multiple sources of prediction about the self and other (Mayo and Shamay-Tsoory, 2024), and we must determine how we weight the precision, or reliability, of these different sources, as well as how we render information about the self and other separable. Is this achieved by different cell populations coding information about the self and other (Mayo and Shamay-Tsoory, 2024)? Or could mechanisms similar to those proposed to distinguish products of imagination from reality (similarly internal vs. external sources) also help in determining the information source in social situations?
Social predictions might be supported by interbrain synchronization (measured via hyperscanning), as discussed by Mayo and Shamay-Tsoory (2024; focus on social learning) and Antonelli et al. (2024; focus on emotion). We propose that one key challenge for this approach is determining the role played by different event-related inputs and responses in the effects: interpretation of hyperscanning data is plagued by the problem that brains will be “in synch” if two individuals are either perceiving the same events or producing the same behaviour. The brain’s responses to moving our arm or looking at a face are remarkably similar across individuals, such that if two of us perceive or produce the same event our neural response will be matched. Fluctuations in synchronisation according to, e.g., dominance of individuals or levels of excitement on stage, could be determined by fluctuations in whether we attend to, or produce, the same events. It is crucial to understand the fascinating influence of these effects on synchronisation.
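For readers who want the gist of the basic machinery being invoked here, the following is a minimal sketch of a generic predictive-coding loop in Python - my own toy illustration of the general scheme (a hypothesis unit predicts the input, an error unit registers the mismatch, and the hypothesis is nudged to reduce that mismatch), not the laminar or social models discussed in the cited papers. The stimulus value and precision weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
true_orientation = 45.0    # the stimulus actually present (arbitrary illustrative value)
belief = 0.0               # top-down prediction, initialized to a prior
precision = 0.2            # weight given to prediction errors

for t in range(50):
    sample = true_orientation + rng.normal(0, 5)  # noisy bottom-up input
    prediction_error = sample - belief            # mismatch computed at the "error" stage
    belief += precision * prediction_error        # prediction updated to reduce the error

print(f"final belief ≈ {belief:.1f} (true value {true_orientation})")
```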
Monday, December 09, 2024
An AI framework for neural–behavioral modeling
Work of Sani et al. (open access) is reported in the Oct. 2024 issue of Nature Neuroscience. From the editor's summary:
Neural dynamics are complex and simultaneously relate to distinct behaviors. To address these challenges, Sani et al. have developed an AI framework termed DPAD that achieves nonlinear dynamical modeling of neural–behavioral data, dissociates behaviorally relevant neural dynamics, and localizes the source of nonlinearity in the dynamical model. What DPAD does is visualized as separating the overall brain activity into distinct pieces related to specific behaviors and discovering how these pieces fit together to build the overall activity.
Here is the Sani et al. abstract:
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural–behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural–behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural–behavioral data.
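To make the "dissociate and prioritize" idea more concrete, here is a deliberately simplified linear toy in Python - my own illustration, not the DPAD nonlinear network itself. Stage 1 extracts a low-dimensional latent from simulated neural data that is optimized to predict behavior (via reduced-rank regression); stage 2 then models the residual neural variance with a second latent. All data and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_beh = 500, 30, 2
neural = rng.standard_normal((T, n_neurons))                 # simulated neural activity
behavior = (neural[:, :3] @ rng.standard_normal((3, n_beh))
            + 0.1 * rng.standard_normal((T, n_beh)))         # behavior driven by a few neurons

# Stage 1: behaviorally relevant latent via rank-2 reduced-rank regression.
B, *_ = np.linalg.lstsq(neural, behavior, rcond=None)
U, S, _ = np.linalg.svd(neural @ B, full_matrices=False)
latent_beh = U[:, :2] * S[:2]                                # behavior-predictive latent

# Stage 2: model the remaining neural variance with a second latent (plain PCA here).
recon = latent_beh @ np.linalg.lstsq(latent_beh, neural, rcond=None)[0]
residual = neural - recon
latent_rest = np.linalg.svd(residual - residual.mean(0), full_matrices=False)[0][:, :2]

print(latent_beh.shape, latent_rest.shape)   # two dissociated low-dimensional descriptions
```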
Saturday, December 07, 2024
Battle of the Bots - AI gets creative with Brain Hacks
This post is meant to archive my recent exercise of asking ChatGPT 4o, Perplexity, Gemini+ advanced, and Anthropic (which does not offer links to its responses) to expand my MindBlog's Brain Hacks post into a more complete essay that offers more background and context. They responded with slightly different styles and impressive results that required tweaking for only a few small errors. I thought my more concise text was more inviting, and didn't see anything really new, so my next prompt (which I had asked ChatGPT to edit and improve) said:
"Your responses are well-constructed, but I’m seeking a deeper layer of insight—something that might challenge conventional perspectives or unveil an overlooked nuance. Can you reframe these ideas or introduce a novel approach that stretches beyond familiar interpretations??"
Here things got interesting. Click the links above and scroll down to the responses to this final prompt to see the results. The links will most likely remain live for as long as I maintain paying accounts with the bots.
I pass on here what I think is the most original and striking response, from Perplexity. It was right on target, brimming with new perspectives (some of them the subject of previous posts on MindBlog), and pointed to a treasure trove of articles and ideas I was not familiar with, drawn from the seven sources it cited. Its response also provided a potpourri of more or less relevant graphic images of brains and models.
Here is Perplexity's text, with links to a few of the seven sources it cites, all of which you can obtain directly from Perplexity:
Reframing our understanding of introspective awareness and cognitive modulation requires us to challenge some fundamental assumptions about the nature of consciousness and self-regulation. Let's explore a more provocative perspective that integrates recent neuroscientific findings with evolutionary theory and cognitive science.
The Illusion of Conscious Control
Contrary to the intuitive notion that our conscious mind is the primary driver of behavior and emotion regulation, emerging evidence suggests that consciousness might be more of an observer than a controller. The brain's emotional and regulatory processes often occur before we become consciously aware of them, suggesting that our sense of volitional control might be, to some extent, an illusion.
Consider the possibility that what we perceive as "introspective awareness" is actually a post-hoc narrative constructed by our brain to make sense of unconscious processes. This perspective challenges the traditional view of top-down emotion regulation and suggests a more complex interplay between conscious and unconscious processes.
The Social Evolution of Consciousness
Recent theories propose that human consciousness evolved primarily for social survival rather than individual cognitive enhancement [ref]. This social evolutionary framework suggests that subjective awareness might lack independent causal influence over psychological processes or actions. Instead, it may serve as a communication tool for social interaction and collective decision-making.
This perspective reframes our understanding of self-awareness and emotion regulation. Rather than viewing these abilities as individual cognitive skills, we might consider them as evolved social tools that facilitate group cohesion and cooperation. The ability to regulate emotions, in this context, becomes less about personal control and more about social signaling and coordination.
Neurotransmitter Systems and Emotion Regulation
Recent research has begun to map the overlap between emotion regulation brain networks and specific neurotransmitter systems [ref] [ref]. This approach offers a novel way to understand the biochemical basis of self-regulation. Instead of focusing solely on cognitive strategies or neural networks, we might conceptualize emotion regulation as a dynamic interplay between neurotransmitter systems.
This biochemical perspective opens up new avenues for understanding individual differences in emotion regulation abilities. It suggests that what we perceive as "willpower" or "self-control" might be more accurately described as the efficient functioning of specific neurotransmitter systems.
The Myth of the Unified Self
Traditional models of self-awareness and emotion regulation often assume a unified, coherent self that observes and controls mental processes. However, neuroscientific evidence increasingly points to a more fragmented view of consciousness and self-hood.
Instead of a single, unified self, we might conceptualize the mind as a coalition of semi-independent processes, each with its own goals and methods of regulation. This "multiple selves" model challenges our intuitive understanding of self-awareness and suggests that emotion regulation might be more about negotiation between different aspects of our psyche than about top-down control.
Conclusion: Towards a New Paradigm
By integrating these perspectives, we can envision a new paradigm for understanding introspective awareness and cognitive modulation. This model would:
1. Recognize the limited role of conscious volition in emotion regulation
2. Emphasize the social and communicative functions of self-awareness
3. Incorporate the biochemical basis of regulatory processes
4. Embrace a more fragmented view of consciousness and selfhood
This approach challenges us to rethink fundamental concepts like free will, personal responsibility, and the nature of the self. It suggests that effective emotion regulation might be less about strengthening willpower and more about understanding and working with the complex, often unconscious processes that shape our emotional lives. By adopting this more nuanced and provocative perspective, we open up new avenues for research and potentially more effective interventions for emotional and cognitive well-being.
Friday, November 29, 2024
MindBlog's Brain Hacks
Introspective awareness and modulation of both ancient and more recently evolved aspects of our cognition:
Brain Hack #1
-The reptilian brain (whose modern descendant is found in the mammalian hypothalamus) generates affective states along axes of arousal and valence, states that in higher primates can be assessed by introspective awareness.
Brain Hack #2
-The early mammalian emotional brain, with its ability to model a self (correlating with the appearance of the agranular prefrontal cortex), develops the ability to distinguish between being (immersed in) an affective state and seeing (observing) it.
Brain Hack #3
-The appearance in the primate brain of the further ability to imagine the minds of others (correlating with the appearance of the granular prefrontal cortex) permits appropriate assignment of agency: being able to distinguish one’s own experience (and problems) from the experience (and problems) of others.
The introspection that enables this ensemble of brain hacks can be strengthened by practice of three fundamental meditation techniques: focused awareness (in which our brain’s attentional mode predominates), open awareness (engaging our default mode network), and non-dual awareness (during which both are muted).
*************
The above is an early draft text that I will be editing further (like my “Tokens of Sanity” post, which has had at least six revisions since its 9/29/2024 posting). It is trying to meld together and condense threads from my last public lecture and Max Bennett's recent book "A Brief History of Intelligence." Feedback and comment welcome.
Sunday, September 29, 2024
Tokens of sanity
-Being a calm space in which nothing can hurry
-An animal body that pretends to be human
-Dissociating from the word cloud and emotional reactivities of self and other selves.
-A courteous guest in one’s own body and when with others, owning one’s own experience and letting others own theirs.
-Favoring reflectivity over reactivity, caressing novelty
-Clinging to nothing, the current self being a passing fantasy
-Letting each moment be what it is, not what it should be
-A blip in the flow of cosmic time
Monday, August 26, 2024
The brain simulates actions and their consequences during REM sleep
During REM sleep our brains make up and work through simulated scenarios, while putting our bodies into paralysis so we don't thrash about dangerously.... Senzai and Scanziani show what is going on in mouse brains. Here is the first paragraph (abstract) of their open access text:
Vivid dreams mostly occur during a phase of sleep called REM [1–5]. During REM sleep, the brain’s internal representation of direction keeps shifting like that of an awake animal moving through its environment [6–8]. What causes these shifts, given the immobility of the sleeping animal? Here we show that the superior colliculus of the mouse, a motor command center involved in orienting movements [9–15], issues motor commands during REM sleep, e.g., turn left, that are similar to those issued in the awake behaving animal. Strikingly, these motor commands, despite not being executed, shift the internal representation of direction as if the animal had turned. Thus, during REM sleep, the brain simulates actions by issuing motor commands that, while not executed, have consequences as if they had been. This study suggests that the sleeping brain, while disengaged from the external world, uses its internal model of the world to simulate interactions with it.
Wednesday, July 10, 2024
From nematodes to humans, a common brain network motif intertwines hierarchy and modularity.
Pathak et al. (abstract below) suggest the evolved pattern they describe may apply to information processing networks in general, in particular to those of evolving AI implementations.
Significance
Nervous systems are often schematically represented in terms of hierarchically arranged layers with stimuli in the “input” layer sequentially transformed through successive layers, eventually giving rise to response in the “output” layer. Empirical investigations of hierarchy in specific brain regions, e.g., the visual cortex, typically employ detailed anatomical information. However, a general method for identifying the underlying hierarchy from the connectome alone has so far been elusive. By proposing an optimized index that quantifies the hierarchy extant in a network, we reveal an architectural motif underlying the mesoscopic organization of nervous systems across different species. It involves both modular partitioning and hierarchical layered arrangement, suggesting that brains employ an optimal mix of parallel (modular) and sequential (hierarchic) information processing.
Abstract
Networks involved in information processing often have their nodes arranged hierarchically, with the majority of connections occurring in adjacent levels. However, despite being an intuitively appealing concept, the hierarchical organization of large networks, such as those in the brain, is difficult to identify, especially in absence of additional information beyond that provided by the connectome. In this paper, we propose a framework to uncover the hierarchical structure of a given network, that identifies the nodes occupying each level as well as the sequential order of the levels. It involves optimizing a metric that we use to quantify the extent of hierarchy present in a network. Applying this measure to various brain networks, ranging from the nervous system of the nematode Caenorhabditis elegans to the human connectome, we unexpectedly find that they exhibit a common network architectural motif intertwining hierarchy and modularity. This suggests that brain networks may have evolved to simultaneously exploit the functional advantages of these two types of organizations, viz., relatively independent modules performing distributed processing in parallel and a hierarchical structure that allows sequential pooling of these multiple processing streams. An intriguing possibility is that this property we report may be common to information processing networks in general.
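As a toy illustration of the two organizational principles at play, the snippet below scores a stand-in graph for modularity and for a crude layered-hierarchy proxy - my own simple measures using networkx's community functions and a breadth-first layering, not the optimized hierarchy index Pathak et al. propose, and the karate-club graph is just a placeholder for a connectome.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()   # placeholder network; a connectome graph could be substituted

# Modularity: partition into communities and score how module-dominated the wiring is.
communities = greedy_modularity_communities(G)
Q = modularity(G, communities)

# Crude hierarchy proxy (not the authors' index): assign each node a level by
# breadth-first distance from a seed node, then ask what fraction of edges stay
# within a level or link adjacent levels, as a layered architecture would predict.
levels = nx.single_source_shortest_path_length(G, source=0)
layered = sum(abs(levels[u] - levels[v]) <= 1 for u, v in G.edges())

print(f"modularity Q = {Q:.2f}; layered-edge fraction = {layered / G.number_of_edges():.2f}")
```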
Friday, December 15, 2023
The ebb and flow of physical and cognitive fatigue
A key insight from this work is that there are important parallels between physical and cognitive fatigue. Specifically, for both types of fatigue, the best-fitting computational model parsed fatigue into two components: a recoverable component (i.e., the share of fatigue that increases with effort and recovers with rest) and an unrecoverable component (i.e., the share of fatigue that only increases with effort and does not recover with rest, at least not within the ~1-h session). For physical fatigue, this result conceptually replicates a previous study; for cognitive fatigue, this result is new and original. Together, these findings paint a nuanced picture of how both physical and cognitive fatigue emerge over time. From an applied angle, the distinction between recoverable and unrecoverable fatigue is intriguing: it has potential implications for task design, such as in the contexts of education and work... On balance, physical and cognitive fatigue may be more similar than they are different.
Here is the Matthews et al. abstract:
The willingness to exert effort for reward is essential but comes at the cost of fatigue. Theories suggest fatigue increases after both physical and cognitive exertion, subsequently reducing the motivation to exert effort. Yet a mechanistic understanding of how this happens on a moment-to-moment basis, and whether mechanisms are common to both mental and physical effort, is lacking. In two studies, participants reported momentary (trial-by-trial) ratings of fatigue during an effort-based decision-making task requiring either physical (grip-force) or cognitive (mental arithmetic) effort. Using a novel computational model, we show that fatigue fluctuates from trial-to-trial as a function of exerted effort and predicts subsequent choices. This mechanism was shared across the domains. Selective to the cognitive domain, committing errors also induced momentary increases in feelings of fatigue. These findings provide insight into the computations underlying the influence of effortful exertion on fatigue and motivation, in both physical and cognitive domains.
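The recoverable/unrecoverable distinction is easy to see in a toy simulation. The sketch below uses my own arbitrary parameters (not the fitted values from Matthews et al.): both components build with each effortful trial, but only one dissipates on periodic rest trials, so overall fatigue ratchets upward even with rest.

```python
def simulate_fatigue(trials=60, rest_every=10, alpha=0.3, beta=0.05, recovery=0.5):
    """Toy two-component fatigue model; all parameters are illustrative assumptions."""
    recoverable, unrecoverable, history = 0.0, 0.0, []
    for t in range(trials):
        effort = 1.0                       # effort exerted on every work trial
        recoverable += alpha * effort      # builds with effort, can dissipate
        unrecoverable += beta * effort     # builds with effort, never dissipates
        if (t + 1) % rest_every == 0:      # periodic rest
            recoverable *= (1 - recovery)  # only this component recovers
        history.append(recoverable + unrecoverable)
    return history

trace = simulate_fatigue()
print(f"momentary fatigue after {len(trace)} trials: {trace[-1]:.2f}")
```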
Monday, October 23, 2023
Architectural experience influences the processing of others’ body expressions
An open access article by Presti et al.:
Significance
The motor system has been recognized as a fundamental neural machinery for spatial and social cognition, making the study of the interplay between architecture and social behavior worthwhile. Here, we tested how a virtual architectural experience alters the subsequent processing of body expressions, showing that the motor system participates at two distinct stages: the earliest influenced by the dynamic architectural experience and the latter modulated by the actual physical characteristics. These findings highlight the existence of an overlapping motor neural substrate devoted to spatial and social cognition, with the architectural space exerting an early and possibly adapting effect on the later social experience. Ultimately, spatial design may impact the processing of human emotions.
Abstract
The interplay between space and cognition is a crucial issue in Neuroscience leading to the development of multiple research fields. However, the relationship between architectural space and the movement of the inhabitants and their interactions has been too often neglected, failing to provide a unifying view of architecture's capacity to modulate social cognition broadly. We bridge this gap by requesting participants to judge avatars’ emotional expression (high vs. low arousal) at the end of their promenade inside high- or low-arousing architectures. Stimuli were presented in virtual reality to ensure a dynamic, naturalistic experience. High-density electroencephalography (EEG) was recorded to assess the neural responses to the avatar’s presentation. Observing highly aroused avatars increased Late Positive Potentials (LPP), in line with previous evidence. Strikingly, 250 ms before the occurrence of the LPP, P200 amplitude increased due to the experience of low-arousing architectures, reflecting an early greater attention during the processing of body expressions. In addition, participants stared longer at the avatar’s head and judged the observed posture as more arousing. Source localization highlighted a contribution of the dorsal premotor cortex to both P200 and LPP. In conclusion, the immersive and dynamic architectural experience modulates human social cognition. In addition, the motor system plays a role in processing architecture and body expressions suggesting that the space and social cognition interplay is rooted in overlapping neural substrates. This study demonstrates that the manipulation of mere architectural space is sufficient to influence human social cognition.
Monday, October 09, 2023
What your brain is doing after the light turns green.
Gandhi and collaborators show that if you step out to cross the street without looking right or left, the neural activity in your brain is different than if you look from side to side first to be sure no cars are coming. Population-level analysis shows that movement-related transient activity patterns in superior colliculus neurons change in the two different contexts, a difference that is not readily identifiable in single-unit recordings. Here is their technical abstract:
Sensorimotor transformation is the process of first sensing an object in the environment and then producing a movement in response to that stimulus. For visually guided saccades, neurons in the superior colliculus (SC) emit a burst of spikes to register the appearance of stimulus, and many of the same neurons discharge another burst to initiate the eye movement. We investigated whether the neural signatures of sensation and action in SC depend on context. Spiking activity along the dorsoventral axis was recorded with a laminar probe as Rhesus monkeys generated saccades to the same stimulus location in tasks that require either executive control to delay saccade onset until permission is granted or the production of an immediate response to a target whose onset is predictable. Using dimensionality reduction and discriminability methods, we show that the subspaces occupied during the visual and motor epochs were both distinct within each task and differentiable across tasks. Single-unit analyses, in contrast, show that the movement-related activity of SC neurons was not different between tasks. These results demonstrate that statistical features in neural activity of simultaneously recorded ensembles provide more insight than single neurons. They also indicate that cognitive processes associated with task requirements are multiplexed in SC population activity during both sensation and action and that downstream structures could use this activity to extract context. Additionally, the entire manifolds associated with sensory and motor responses, respectively, may be larger than the subspaces explored within a certain set of experiments.
Monday, September 25, 2023
Emergent analogical reasoning in large language models
The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
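To give a feel for what a non-visual matrix reasoning item looks like, here is a made-up example in the spirit of the digit-matrix format described in the abstract (my own toy item, not one from the study): each row follows the same progression rule, and the model must supply the missing entry.

```python
# A made-up digit-matrix item; the rule is a constant left-to-right progression.
problem = (
    "[1 2 3]\n"
    "[4 5 6]\n"
    "[7 8 ?]"
)
answer = 9
print(problem)
print("expected completion:", answer)
```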
Wednesday, September 20, 2023
Chemistry that regulates whether we stay with what we're doing or try something new
Sidorenko et al. demonstrate that stimulating the brain's cholinergic and noradrenergic systems enhances optimal foraging behaviors in humans. Their significance statement and abstract:
Significance
Deciding when to say “stop” to the ongoing course of action is paramount for preserving mental health, ensuring the well-being of oneself and others, and managing resources in a sustainable fashion. And yet, cross-species studies converge in their portrayal of real-world decision-makers who are prone to the overstaying bias. We investigated whether and how cognitive enhancers can reduce this bias in a foraging context. We report that the pharmacological upregulation of cholinergic and noradrenergic systems enhances optimality in a common dilemma—staying with the status quo or leaving for more rewarding alternatives—and thereby suggest that acetylcholine and noradrenaline causally mediate foraging behavior in humans.
Abstract
Foraging theory prescribes when optimal foragers should leave the current option for more rewarding alternatives. Actual foragers often exploit options longer than prescribed by the theory, but it is unclear how this foraging suboptimality arises. We investigated whether the upregulation of cholinergic, noradrenergic, and dopaminergic systems increases foraging optimality. In a double-blind, between-subject design, participants (N = 160) received placebo, the nicotinic acetylcholine receptor agonist nicotine, a noradrenaline reuptake inhibitor reboxetine, or a preferential dopamine reuptake inhibitor methylphenidate, and played the role of a farmer who collected milk from patches with different yield. Across all groups, participants on average overharvested. While methylphenidate had no effects on this bias, nicotine, and to some extent also reboxetine, significantly reduced deviation from foraging optimality, which resulted in better performance compared to placebo. Concurring with amplified goal-directedness and excluding heuristic explanations, nicotine independently also improved trial initiation and time perception. Our findings elucidate the neurochemical basis of behavioral flexibility and decision optimality and open unique perspectives on psychiatric disorders affecting these functions.
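The optimality benchmark this kind of task is built around is the marginal value theorem: leave a depleting patch once its instantaneous reward rate falls to the environment's long-run average rate; staying past that point is the "overharvesting" the abstract describes. Below is a small numerical sketch with my own made-up yield, decay, and travel-time values, not the study's actual task parameters.

```python
import numpy as np

yield0, decay, travel_time = 10.0, 0.2, 6.0   # made-up patch and travel parameters

# Total reward from staying time T in a patch whose rate decays as yield0 * exp(-decay*t).
T = np.arange(0.1, 60, 0.1)
total_reward = (yield0 / decay) * (1 - np.exp(-decay * T))

# Long-run reward rate if the forager always leaves after time T; the optimum is where
# this is maximized, i.e., where the patch's instantaneous rate equals the average rate.
long_run_rate = total_reward / (T + travel_time)
optimal_T = T[np.argmax(long_run_rate)]

print(f"optimal leaving time ≈ {optimal_T:.1f} s; staying longer = overharvesting")
```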
Monday, August 28, 2023
A shared novelty-seeking basis for creativity and curiosity
Curiosity and creativity are central pillars of human growth and invention. While they have been studied extensively in isolation, the relationship between them has not yet been established. We propose that curiosity and creativity both emanate from the same mechanism of novelty-seeking. We first present a synthesis showing that curiosity and creativity are affected similarly by a number of key cognitive faculties such as memory, cognitive control, attention, and reward. We then review empirical evidence from neuroscience research, indicating that the same brain regions are involved in both curiosity and creativity, focusing on the interplay between three major brain networks: the default-mode network, the salience network, and the executive control network. After substantiating the link between curiosity and creativity, we propose a novelty-seeking model (NSM) that underlies them both and suggest that the manifestation of the NSM is governed by one’s state of mind (SoM).
Wednesday, July 26, 2023
A MindBlog paragraph on non-dual awareness massaged by Bard and ChatGPT-4
This post is using MindBlog to archive a further example of the sort shown in Monday’s post ...of GPT bots editing a paragraph I wrote, this one being on non-dual awareness. The paragraph:
The fact that there is no there there means it. Empty is empty. The ground state is just the mute body doing its thing. From its depths storylines rise and fall, bonbons to savor or ignore. The body can know that it has found a new ground state, and can be no longer willing to just play nice if the upstairs word salad is dinking around with or threatening its peace. It can withdraw from the fray. It can rest as a calm non-dual self centered within a cosmic perspective of time and space that views the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.
After incorporating some changes suggested by Google Bard and GPT4:
When we say something is empty, we mean just that: it contains nothing. Our natural baseline or 'ground state' is the body performing its automatic functions without our conscious intervention. From the body's inherent depths, various thoughts or experiences emerge and recede, like bonbons we can savor or ignore. The body can grow to know that it has found a new ground state in the present moment, a ground state of awareness that no longer needs to be caught up in word salads of disruptive thoughts that can threaten its peace. It can withdraw from the fray and can rest as a calm non-dual state of being centered within a cosmic perspective of time and space that can view the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.
Wednesday, June 21, 2023
Turing, von Neumann, and the computational architecture of biological machines
In the mid-1930s, the English mathematician and logician Alan Turing invented an imaginary machine which could emulate the process of manipulating finite symbolic configurations by human computers. His machine launched the field of computer science and provided a foundation for the modern-day programmable computer. A decade later, building on Turing’s machine, the American–Hungarian mathematician John von Neumann invented an imaginary self-reproducing machine capable of open-ended evolution. Through his machine, von Neumann answered one of the deepest questions in Biology: Why is it that all living organisms carry a self-description in the form of DNA? The story behind how two pioneers of computer science stumbled on the secret of life many years before the discovery of the DNA double helix is not well known, not even to biologists, and you will not find it in biology textbooks. Yet, the story is just as relevant today as it was eighty years ago: Turing and von Neumann left a blueprint for studying biological systems as if they were computing machines. This approach may hold the key to answering many remaining questions in Biology and could even lead to advances in computer science.
Monday, May 15, 2023
People who talk too much
I host a monthly discussion group in Austin, TX, The Austin Rainbow Forum, that meets at 2 pm on the first Sunday of every month to consider interesting topics and ideas. This past May 7, one of our group members led a discussion of "Overtalking" in the modern world, which has got us all spouting opinions, giving advice, and getting ourselves in trouble, according to Dan Lyons in his recent book titled "STFU: The Power of Keeping Your Mouth Shut in an Endlessly Noisy World." The central ideas in Lyons’ book are summarized in this Time Magazine article. I looked through a reviewer's copy of the book I was sent and suggest that it is worth a look if you are stimulated by the summary article. The bottom line of the book could be stated as "Shut up and listen instead of talking so much." Lyons offers five nudges:
-When possible, say nothing
-Master the power of the pause
-Quit social media
-Seek out silence
-Learn how to listen
Lyons is a professional columnist who writes with a very engaging style, even if the level of his coverage is sometimes a bit superficial. (He quotes a researcher who studied brain activity and “figured out what causes talkaholism” ... unfortunately, on doing a quick look-up of the work describing the neuronal measurements, I found that there is no there there.)