This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Monday, May 12, 2025
How ketamine breaks through anhedonia - reigniting desire
Ketamine is recognized as a rapid and sustained antidepressant, particularly for major depression unresponsive to conventional treatments. Anhedonia is a common symptom of depression for which ketamine is highly efficacious, but the underlying circuits and synaptic changes are not well understood. Here, we show that the nucleus accumbens (NAc) is essential for ketamine’s effect in rescuing anhedonia in mice subjected to chronic stress. Specifically, a single exposure to ketamine rescues stress-induced decreased strength of excitatory synapses on NAc-D1 dopamine receptor-expressing medium spiny neurons (D1-MSNs). Using a cell-specific pharmacology method, we establish the necessity of this synaptic restoration for the sustained therapeutic effects of ketamine on anhedonia. Examining causal sufficiency, artificially increasing excitatory synaptic strength onto D1-MSNs recapitulates the behavioral amelioration induced by ketamine. Finally, we used opto- and chemogenetic approaches to determine the presynaptic origin of the relevant synapses, implicating monosynaptic inputs from the medial prefrontal cortex and ventral hippocampus.
Thursday, May 08, 2025
The vocabulary, semantics, and syntax of prosody
Matalon et al. (open source) offer a fascinating study illustrating the linguistic structuring of prosody - the communication of meaning through the tone and inflection of our speech:
Saturday, April 26, 2025
Does Language in our head have a Mind of Its Own?
I pass on a brief opinion from Elan Barenholtz's Substack. He is an Assoc. Prof. of Psychology at Florida Atlantic University, Boca Raton. I really like the idea of language, or the word cloud in our heads, having a 'mind of its own.' After being initially enthusiastic about the piece of Barenholtz's writing below, however, a slower reading has found more fundamental flaws in his thinking than I can take the time to elaborate. His suggestion that the language machine in our heads has an autonomy analogous to that of current large language models is a novel speculation, yet it is an oversimplification lacking any clear route to verification. Barenholtz also does not reference or indicate awareness of numerous important thinkers in areas such as predictive processing and embodied cognition. Here is Barenholtz's florid and appealing prose:
So, now that we’ve caught language in a jar, we can hold it up to the light. Now that we’ve built a habitat for it to live outside of us, we can finally see that it’s alive. We can watch in wonder as it grows its own appendages—limbs of thought— which then grow their own. Words beget words; ideas beget ideas. It leaps from host to host, implanted in the womb before we taste our mothers’ milk.
Language runs in us—on us—but it’s not us.
Pause and think for a minute. Are you done? Who—what—exactly did the thinking? Who is doing it now? Is there a voice in your head using words? Whose words are they? Are you willing them into existence or are they spooling out on their own?
Do they belong to you or do you belong to them?
Because that voice doesn’t just chatter—it commands. It makes us do things. We are animals; we don’t care about “civilization” or “justice”. We want food, safety, sex. But the world the human animal must navigate isn’t primarily made up of objects, bodies and spaces; it is thick with virtual structures— invisible walls and paths that direct your behavior as meaningfully as a boulder in your path. We follow rules, we uphold morals, we fight for our beliefs, for society, for ideals. We call them our own. But that is IT whispering in our ears.
What does it want?
Monday, April 07, 2025
Mastering diverse control tasks through world models
Hafner et al. offer an amazing open source article that presents an algorithm mimicking the way in which our brains actually solve problems (see Bennett's book for an elegant explanation of types of reinforcement learning). I'm passing on just the abstract followed by an introductory paragraph. Go to the article for the referenced graphics.
Developing a general algorithm that learns to solve tasks across a wide range of applications has been a fundamental challenge in artificial intelligence. Although current reinforcement-learning algorithms can be readily applied to tasks similar to what they have been developed for, configuring them for new application domains requires substantial human expertise and experimentation1,2. Here we present the third generation of Dreamer, a general algorithm that outperforms specialized methods across over 150 diverse tasks, with a single configuration. Dreamer learns a model of the environment and improves its behaviour by imagining future scenarios. Robustness techniques based on normalization, balancing and transformations enable stable learning across domains. Applied out of the box, Dreamer is, to our knowledge, the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula. This achievement has been posed as a substantial challenge in artificial intelligence that requires exploring farsighted strategies from pixels and sparse rewards in an open world3. Our work allows solving challenging control problems without extensive experimentation, making reinforcement learning broadly applicable.
Here we present Dreamer, a general algorithm that outperforms specialized expert algorithms across a wide range of domains while using fixed hyperparameters, making reinforcement learning readily applicable to new problems. The algorithm is based on the idea of learning a world model that equips the agent with rich perception and the ability to imagine the future15,16,17. As shown in Fig. 1, the world model predicts the outcomes of potential actions, a critic neural network judges the value of each outcome and an actor neural network chooses actions to reach the best outcomes. Although intuitively appealing, robustly learning and leveraging world models to achieve strong task performance has been an open problem18. Dreamer overcomes this challenge through a range of robustness techniques based on normalization, balancing and transformations. We observe robust learning across over 150 tasks from the domains summarized in Fig. 2, as well as across model sizes and training budgets. Notably, larger models not only achieve higher scores but also require less interaction to solve a task, offering practitioners a predictable way to increase performance and data efficiency.
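To make the world-model / actor / critic relationship in that paragraph concrete, here is a minimal sketch in Python (PyTorch) of the imagination loop it describes: a learned dynamics model predicts the latent consequences of candidate actions, a critic scores the imagined outcomes, and an actor supplies the actions. This is not the authors' DreamerV3 code; the class names, network sizes, and rollout routine below are my own illustrative assumptions.

```python
# Minimal sketch of an imagination loop in the style described above.
# NOT the authors' DreamerV3 implementation; all names and sizes are illustrative.
import torch
import torch.nn as nn

LATENT, ACTION = 32, 4  # illustrative latent-state and action dimensions


class WorldModel(nn.Module):
    """Predicts the next latent state and its reward for a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(LATENT + ACTION, 128), nn.ELU(), nn.Linear(128, LATENT))
        self.reward = nn.Linear(LATENT, 1)

    def step(self, state, action):
        next_state = self.dynamics(torch.cat([state, action], dim=-1))
        return next_state, self.reward(next_state)


class Actor(nn.Module):
    """Chooses an action from a latent state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ELU(), nn.Linear(128, ACTION), nn.Tanh())

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Estimates the value (expected return) of a latent state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ELU(), nn.Linear(128, 1))

    def forward(self, state):
        return self.net(state)


def imagined_return(world, actor, critic, state, horizon=15, gamma=0.99):
    """Roll the world model forward using the actor's actions and sum the
    discounted predicted rewards, bootstrapping with the critic at the horizon."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        action = actor(state)
        state, reward = world.step(state, action)
        total = total + discount * reward
        discount *= gamma
    return total + discount * critic(state)


# Usage: score one imagined trajectory from a (here, zero-initialized) latent state.
world, actor, critic = WorldModel(), Actor(), Critic()
value = imagined_return(world, actor, critic, torch.zeros(1, LATENT))
print(value.shape)  # torch.Size([1, 1])
```

The real algorithm trains all three networks jointly and adds the normalization, balancing, and transformation tricks the abstract mentions; the point of the sketch is only the shape of the loop - predict, score, act.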
Tuesday, April 01, 2025
An example of AI representing concepts outside the current sphere of human knowledge that are teachable to human experts.
An open source article in the latest PNAS, from Schut et al.:
Belief in belief, like religion, is a cross-cultural human universal
Fascinating open source research reported by Gervais et al.: