This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Monday, April 07, 2025
Mastering diverse control tasks through world models
Hafner et al. offer an amazing open source article that presents an algorithm mimicking the way in which our brains actually solve problems (see Bennett's book for an elegant explanation of types of reinforcement learning). I'm passing on just the abstract followed by an introductory paragraph. Go to the article for the referenced graphics.
Developing a general algorithm that learns to solve tasks across a wide range of applications has been a fundamental challenge in artificial intelligence. Although current reinforcement-learning algorithms can be readily applied to tasks similar to what they have been developed for, configuring them for new application domains requires substantial human expertise and experimentation [1,2]. Here we present the third generation of Dreamer, a general algorithm that outperforms specialized methods across over 150 diverse tasks, with a single configuration. Dreamer learns a model of the environment and improves its behaviour by imagining future scenarios. Robustness techniques based on normalization, balancing and transformations enable stable learning across domains. Applied out of the box, Dreamer is, to our knowledge, the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula. This achievement has been posed as a substantial challenge in artificial intelligence that requires exploring farsighted strategies from pixels and sparse rewards in an open world [3]. Our work allows solving challenging control problems without extensive experimentation, making reinforcement learning broadly applicable.
Here we present Dreamer, a general algorithm that outperforms specialized expert algorithms across a wide range of domains while using fixed hyperparameters, making reinforcement learning readily applicable to new problems. The algorithm is based on the idea of learning a world model that equips the agent with rich perception and the ability to imagine the future [15,16,17]. As shown in Fig. 1, the world model predicts the outcomes of potential actions, a critic neural network judges the value of each outcome and an actor neural network chooses actions to reach the best outcomes. Although intuitively appealing, robustly learning and leveraging world models to achieve strong task performance has been an open problem [18]. Dreamer overcomes this challenge through a range of robustness techniques based on normalization, balancing and transformations. We observe robust learning across over 150 tasks from the domains summarized in Fig. 2, as well as across model sizes and training budgets. Notably, larger models not only achieve higher scores but also require less interaction to solve a task, offering practitioners a predictable way to increase performance and data efficiency.
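The loop the paragraph describes (a learned world model predicts the outcomes of candidate actions, imagined returns stand in for a critic's value judgment, and an actor picks the action whose imagined future scores best) can be illustrated with a toy sketch. This is not the Dreamer architecture: the "world model" here is a linear least-squares fit, the "critic" is a summed imagined reward, and the "actor" is a grid search over candidate actions. All names and the environment are illustrative.

```python
import numpy as np

def true_step(state, action):
    """Hidden environment: state drifts halfway toward the action;
    reward is highest near the goal at 0."""
    next_state = state + 0.5 * (action - state)
    reward = -abs(next_state)
    return next_state, reward

# 1. Learn a world model from random interactions.
#    The model is a linear fit s' ~= c0*s + c1*a (least squares).
rng = np.random.default_rng(0)
states = rng.uniform(-2, 2, 100)
actions = rng.uniform(-2, 2, 100)
nexts = np.array([true_step(s, a)[0] for s, a in zip(states, actions)])
X = np.stack([states, actions], axis=1)
coef, *_ = np.linalg.lstsq(X, nexts, rcond=None)

def imagine(state, action, horizon=5):
    """Roll the learned model forward and sum imagined rewards
    (the critic's role in this toy version)."""
    total = 0.0
    for _ in range(horizon):
        state = coef[0] * state + coef[1] * action
        total += -abs(state)  # imagined reward: closeness to the goal
    return total

def act(state, candidates=np.linspace(-2, 2, 41)):
    """Actor: choose the candidate action with the best imagined return."""
    return max(candidates, key=lambda a: imagine(state, a))

# Plan in imagination, act in the real environment, repeat.
state = 1.5
for _ in range(10):
    state, _ = true_step(state, act(state))
```

After ten steps of planning-by-imagination the state ends up near the goal, even though the agent only ever consulted its learned model when choosing actions.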
Blog Categories:
AI,
brain plasticity,
memory/learning
Tuesday, April 01, 2025
An example of AI representing concepts outside the current sphere of human knowledge that are teachable to human experts.
An open source article in the latest PNAS from Schut et al.:
Significance
As AI systems become more capable, they may internally represent concepts outside the sphere of human knowledge. This work gives an end-to-end example of unearthing machine-unique knowledge in the domain of chess. We obtain machine-unique knowledge from an AI system (AlphaZero) by a method that finds novel yet teachable concepts and show that it can be transferred to human experts (grandmasters). In particular, the new knowledge is learned from internal mathematical representations without a priori knowing what it is or where to start. The produced knowledge from AlphaZero (new chess concepts) is then taught to four grandmasters in a setting where we can quantify their learning, showing that machine-guided discovery and teaching is possible at the highest human level.
Abstract
AI systems have attained superhuman performance across various domains. If the hidden knowledge encoded in these highly capable systems can be leveraged, human knowledge and performance can be advanced. Yet, this internal knowledge is difficult to extract. Due to the vast space of possible internal representations, searching for meaningful new conceptual knowledge can be like finding a needle in a haystack. Here, we introduce a method that extracts new chess concepts from AlphaZero, an AI system that mastered chess via self-play without human supervision. Our method excavates vectors that represent concepts from AlphaZero’s internal representations using convex optimization, and filters the concepts based on teachability (whether the concept is transferable to another AI agent) and novelty (whether the concept contains information not present in human chess games). These steps ensure that the discovered concepts are useful and meaningful. For the resulting set of concepts, prototypes (chess puzzle–solution pairs) are presented to experts for final validation. In a preliminary human study, four top chess grandmasters (all former or current world chess champions) were evaluated on their ability to solve concept prototype positions. All grandmasters showed improvement after the learning phase, suggesting that the concepts are at the frontier of human understanding. Despite the small scale, our result is a proof of concept demonstrating the possibility of leveraging knowledge from a highly capable AI system to advance the frontier of human knowledge; a development that could bear profound implications and shape how we interact with AI systems across many applications.
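The pipeline the abstract describes (extract a concept vector from internal activations via convex optimization, then filter for teachability and novelty) can be sketched in miniature. This is a hedged illustration, not the paper's method: the convex step is replaced by ridge regression, the "activations" are synthetic, the "teachability" check is a sign classifier rather than a student agent, and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Synthetic "activations": positions where a hidden concept is
# present (pos) vs absent (neg), separated along an unknown direction.
true_direction = rng.normal(size=dim)
pos = rng.normal(size=(50, dim)) + true_direction
neg = rng.normal(size=(50, dim)) - true_direction
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(50), -np.ones(50)])

# Extraction step: a convex problem (here, ridge regression) yields a
# candidate concept vector separating the two sets of activations.
lam = 1.0
concept = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)
concept /= np.linalg.norm(concept)

def is_novel(vector, known_concepts, threshold=0.5):
    """Novelty filter: reject vectors too aligned with known
    (human-derived) concept directions."""
    return all(abs(vector @ k) < threshold for k in known_concepts)

def teachability(vector, X, y):
    """Teachability filter: can a simple 'student' classify concept
    presence using only the projection onto the vector?"""
    preds = np.sign(X @ vector)
    return float(np.mean(preds == y))

known = [np.eye(dim)[0]]  # a stand-in "human concept" axis
novel = is_novel(concept, known)
accuracy = teachability(concept, X, y)
```

A vector that passes both filters would then, in the paper's setting, be turned into prototype puzzle-solution pairs for expert validation; here the two filters simply return a boolean and a held-in accuracy.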
Belief in belief, like religion, is a cross-cultural human universal
Fascinating open source research reported by Gervais et al.:
Significance
Religion is a cross-cultural human universal, and religions may have been instrumental in the cultural evolution of widespread cooperation and prosociality. Nonetheless, religiosity has rapidly declined in some parts of the world over just a handful of decades. We tested whether long-standing religious influence intuitively lingers, even in overtly secular and nonreligious societies. Using a classic experimental philosophy task, we found that even atheists in nonreligious societies show evidence of intuitive preferences for religious belief over atheism. This is compelling cross-cultural experimental evidence for intuitive preferences for religion among nonbelievers—a hypothesized phenomenon that philosopher Daniel Dennett dubbed belief in belief.
Abstract
We find evidence of belief in belief—intuitive preferences for religious belief over atheism, even among atheist participants—across eight comparatively secular countries. Religion is a cross-cultural human universal, yet explicit markers of religiosity have rapidly waned in large parts of the world in recent decades. We explored whether intuitive religious influence lingers, even among nonbelievers in largely secular societies. We adapted a classic experimental philosophy task to test for this intuitive belief in belief among people in eight comparatively nonreligious countries: Canada, China, Czechia, Japan, the Netherlands, Sweden, the United Kingdom, and Vietnam (total N = 3,804). Our analyses revealed strong evidence that 1) people intuitively favor religious belief over atheism and that 2) this pattern was not moderated by participants’ own self-reported atheism. Indeed, 3) even atheists in relatively secular societies intuitively prefer belief to atheism. These inferences were robust across different analytic strategies and across other measures of individual differences in religiosity and religious instruction. Although explicit religious belief has rapidly declined in these countries, it is possible that belief in belief may still persist. These results speak to the complex psychological and cultural dynamics of secularization.