Showing posts with label acting/choosing.

Friday, December 15, 2023

The ebb and flow of physical and cognitive fatigue

Matthews et al. have investigated moment-to-moment fluctuations in fatigue, using behavioral experiments and computational modeling to offer a precise account of how fatigue waxes (during physical and cognitive effort) and wanes (during rest). From Bijleveld's review of the work:
A key insight from this work is that there are important parallels between physical and cognitive fatigue. Specifically, for both types of fatigue, the best-fitting computational model parsed fatigue into two components: a recoverable component (i.e., the share of fatigue that increases with effort and recovers with rest) and an unrecoverable component (i.e., the share of fatigue that only increases with effort and does not recover with rest, at least not within the ~1-h session). For physical fatigue, this result conceptually replicates a previous study; for cognitive fatigue, this result is new and original. Together, these findings paint a nuanced picture of how both physical and cognitive fatigue emerge over time. From an applied angle, the distinction between recoverable and unrecoverable fatigue is intriguing: it has potential implications for task design, such as in the contexts of education and work...On balance, physical and cognitive fatigue may be more similar than they are different.
Here is the Matthews et al. abstract:
The willingness to exert effort for reward is essential but comes at the cost of fatigue. Theories suggest fatigue increases after both physical and cognitive exertion, subsequently reducing the motivation to exert effort. Yet a mechanistic understanding of how this happens on a moment-to-moment basis, and whether mechanisms are common to both mental and physical effort, is lacking. In two studies, participants reported momentary (trial-by-trial) ratings of fatigue during an effort-based decision-making task requiring either physical (grip-force) or cognitive (mental arithmetic) effort. Using a novel computational model, we show that fatigue fluctuates from trial-to-trial as a function of exerted effort and predicts subsequent choices. This mechanism was shared across the domains. Selective to the cognitive domain, committing errors also induced momentary increases in feelings of fatigue. These findings provide insight into the computations underlying the influence of effortful exertion on fatigue and motivation, in both physical and cognitive domains.
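
For readers who like to see the logic in code, here is a minimal sketch of the two-component idea described above: a recoverable fatigue term that rises with exerted effort and decays during rest, plus an unrecoverable term that only accumulates. This is my own illustration, not the authors' fitted model; the parameter names and values are assumptions.

```python
# Minimal sketch (not the authors' fitted model) of the two-component fatigue idea:
#   RF = recoverable fatigue (rises with effort, recovers with rest)
#   UF = unrecoverable fatigue (only accumulates with effort)
# Parameter names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

alpha, delta, beta = 0.6, 0.3, 0.05   # assumed rise, recovery, and accumulation rates
RF, UF = 0.0, 0.0
fatigue_trace = []

for trial in range(60):
    effort = rng.choice([0.0, 0.4, 0.8])   # 0.0 stands in for a rest trial
    if effort > 0:
        RF += alpha * effort               # recoverable fatigue grows with exerted effort
        UF += beta * effort                # unrecoverable fatigue only grows
    else:
        RF = max(0.0, RF - delta)          # recoverable fatigue decays during rest
    fatigue_trace.append(RF + UF)          # analogue of a momentary fatigue rating

print(f"final fatigue: recoverable={RF:.2f}, unrecoverable={UF:.2f}")
```

In the full model, total momentary fatigue would then feed back to discount the value of effortful options on subsequent choices.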

Monday, October 23, 2023

Architectural experience influences the processing of others’ body expressions

An open access article by Presti et al:

Significance

The motor system has been recognized as a fundamental neural machinery for spatial and social cognition, making the study of the interplay between architecture and social behavior worthwhile. Here, we tested how a virtual architectural experience alters the subsequent processing of body expressions, showing that the motor system participates at two distinct stages: the earliest influenced by the dynamic architectural experience and the latter modulated by the actual physical characteristics. These findings highlight the existence of an overlapping motor neural substrate devoted to spatial and social cognition, with the architectural space exerting an early and possibly adapting effect on the later social experience. Ultimately, spatial design may impact the processing of human emotions.
Abstract
The interplay between space and cognition is a crucial issue in Neuroscience leading to the development of multiple research fields. However, the relationship between architectural space and the movement of the inhabitants and their interactions has been too often neglected, failing to provide a unifying view of architecture's capacity to modulate social cognition broadly. We bridge this gap by requesting participants to judge avatars’ emotional expression (high vs. low arousal) at the end of their promenade inside high- or low-arousing architectures. Stimuli were presented in virtual reality to ensure a dynamic, naturalistic experience. High-density electroencephalography (EEG) was recorded to assess the neural responses to the avatar’s presentation. Observing highly aroused avatars increased Late Positive Potentials (LPP), in line with previous evidence. Strikingly, 250 ms before the occurrence of the LPP, P200 amplitude increased due to the experience of low-arousing architectures, reflecting an early greater attention during the processing of body expressions. In addition, participants stared longer at the avatar’s head and judged the observed posture as more arousing. Source localization highlighted a contribution of the dorsal premotor cortex to both P200 and LPP. In conclusion, the immersive and dynamic architectural experience modulates human social cognition. In addition, the motor system plays a role in processing architecture and body expressions suggesting that the space and social cognition interplay is rooted in overlapping neural substrates. This study demonstrates that the manipulation of mere architectural space is sufficient to influence human social cognition.
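
For the technically inclined, here is a minimal sketch of how ERP components such as the P200 and the LPP are commonly quantified from epoched EEG: average the trial-averaged waveform within fixed time windows. This is my own illustration on simulated single-channel data, not the authors' high-density pipeline; the window bounds and all parameters are assumptions.

```python
# Illustrative sketch (not the authors' pipeline): quantify two ERP components
# by averaging the trial-averaged waveform in fixed time windows.
# Sampling rate, window bounds, and the simulated data are assumptions.
import numpy as np

fs = 500                                    # sampling rate (Hz), assumed
times = np.arange(-0.2, 0.8, 1 / fs)        # epoch from -200 ms to 800 ms
n_trials = 40
rng = np.random.default_rng(1)

# Fake single-channel epochs: noise plus a positive deflection near 200 ms
epochs = rng.normal(0, 1.0, (n_trials, times.size))
epochs += 2.0 * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))

erp = epochs.mean(axis=0)                   # trial-averaged waveform

def mean_amplitude(erp, times, t_start, t_end):
    """Mean amplitude of the ERP within [t_start, t_end] seconds."""
    mask = (times >= t_start) & (times <= t_end)
    return erp[mask].mean()

p200 = mean_amplitude(erp, times, 0.15, 0.25)   # early component window (assumed)
lpp = mean_amplitude(erp, times, 0.40, 0.70)    # late positive potential window (assumed)
print(f"P200 ~ {p200:.2f} uV, LPP ~ {lpp:.2f} uV")
```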

Monday, October 09, 2023

What your brain is doing after the light turns green.

Gandhi and collaborators show that the neural activity in your brain when you step out to cross the street without looking right or left is different from when you look from side to side first to be sure no cars are coming. Population-level analyses show that movement-related transient activity patterns across a population of superior colliculus neurons change between the two contexts, a difference that is not readily identifiable in single-unit recordings. Here is their technical abstract:

Sensorimotor transformation is the process of first sensing an object in the environment and then producing a movement in response to that stimulus. For visually guided saccades, neurons in the superior colliculus (SC) emit a burst of spikes to register the appearance of stimulus, and many of the same neurons discharge another burst to initiate the eye movement. We investigated whether the neural signatures of sensation and action in SC depend on context. Spiking activity along the dorsoventral axis was recorded with a laminar probe as Rhesus monkeys generated saccades to the same stimulus location in tasks that require either executive control to delay saccade onset until permission is granted or the production of an immediate response to a target whose onset is predictable. Using dimensionality reduction and discriminability methods, we show that the subspaces occupied during the visual and motor epochs were both distinct within each task and differentiable across tasks. Single-unit analyses, in contrast, show that the movement-related activity of SC neurons was not different between tasks. These results demonstrate that statistical features in neural activity of simultaneously recorded ensembles provide more insight than single neurons. They also indicate that cognitive processes associated with task requirements are multiplexed in SC population activity during both sensation and action and that downstream structures could use this activity to extract context. Additionally, the entire manifolds associated with sensory and motor responses, respectively, may be larger than the subspaces explored within a certain set of experiments.
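
To give a flavor of what a population-level analysis adds, the toy sketch below projects simulated pseudo-population activity onto a low-dimensional subspace with PCA and asks whether task context is linearly decodable from the population pattern. It is my own illustration on fabricated data, not the authors' dimensionality-reduction and discriminability analysis; all dimensions and parameters are assumptions.

```python
# Toy sketch (simulated data, not the authors' analysis): project population
# activity onto a low-dimensional subspace and decode task context there.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 60

# Motor-epoch firing rates for two task contexts: a shared mean burst per neuron,
# plus a small context-dependent shift distributed across the population.
base = rng.gamma(5.0, 2.0, n_neurons)
context_shift = rng.normal(0, 0.8, n_neurons)
labels = np.repeat([0, 1], n_trials // 2)            # 0 = delayed task, 1 = immediate task
rates = base + np.outer(labels, context_shift) + rng.normal(0, 2.0, (n_trials, n_neurons))

low_d = PCA(n_components=10).fit_transform(rates)    # population subspace
acc = cross_val_score(LogisticRegression(max_iter=1000), low_d, labels, cv=5).mean()
print(f"cross-validated task decoding accuracy in the subspace: {acc:.2f}")
```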

Monday, September 25, 2023

Emergent analogical reasoning in large language models

Things are moving very fast in AI development. From Webb et al:
The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
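
As a concrete illustration of what a zero-shot analogy item can look like, here is a small sketch that builds a letter-string analogy prompt and scores a reply against the rule-based answer. The item format, the prompt wording, and the placeholder model reply are my own assumptions for illustration, not Webb et al.'s actual materials.

```python
# Illustrative sketch only: construct and score a simple letter-string analogy
# of the kind used to probe zero-shot analogical reasoning.

def rule_answer(source_a: str, source_b: str, target_a: str) -> str:
    """Apply the 'successor of the last letter' rule (abc -> abd) to target_a."""
    assert source_b[:-1] == source_a[:-1] and ord(source_b[-1]) == ord(source_a[-1]) + 1
    return target_a[:-1] + chr(ord(target_a[-1]) + 1)

def make_prompt(source_a: str, source_b: str, target_a: str) -> str:
    return f"Let's solve a puzzle.\nIf {source_a} changes to {source_b}, what does {target_a} change to?"

item = ("abc", "abd", "ijk")
prompt = make_prompt(*item)
expected = rule_answer(*item)              # -> "ijl"

# A model's reply (obtained however you query it) would then be checked like this:
model_reply = "ijl"                        # placeholder, not an actual model output
print(prompt)
print("correct" if expected in model_reply else "incorrect")
```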

Wednesday, September 20, 2023

Chemistry that regulates whether we stay with what we're doing or try something new

Sidorenko et al. demonstrate that stimulating the brain's cholinergic and noradrenergic systems enhances optimal foraging behaviors in humans. Their significance statement and abstract:  

Significance

Deciding when to say “stop” to the ongoing course of action is paramount for preserving mental health, ensuring the well-being of oneself and others, and managing resources in a sustainable fashion. And yet, cross-species studies converge in their portrayal of real-world decision-makers who are prone to the overstaying bias. We investigated whether and how cognitive enhancers can reduce this bias in a foraging context. We report that the pharmacological upregulation of cholinergic and noradrenergic systems enhances optimality in a common dilemma—staying with the status quo or leaving for more rewarding alternatives—and thereby suggest that acetylcholine and noradrenaline causally mediate foraging behavior in humans.
Abstract
Foraging theory prescribes when optimal foragers should leave the current option for more rewarding alternatives. Actual foragers often exploit options longer than prescribed by the theory, but it is unclear how this foraging suboptimality arises. We investigated whether the upregulation of cholinergic, noradrenergic, and dopaminergic systems increases foraging optimality. In a double-blind, between-subject design, participants (N = 160) received placebo, the nicotinic acetylcholine receptor agonist nicotine, a noradrenaline reuptake inhibitor reboxetine, or a preferential dopamine reuptake inhibitor methylphenidate, and played the role of a farmer who collected milk from patches with different yield. Across all groups, participants on average overharvested. While methylphenidate had no effects on this bias, nicotine, and to some extent also reboxetine, significantly reduced deviation from foraging optimality, which resulted in better performance compared to placebo. Concurring with amplified goal-directedness and excluding heuristic explanations, nicotine independently also improved trial initiation and time perception. Our findings elucidate the neurochemical basis of behavioral flexibility and decision optimality and open unique perspectives on psychiatric disorders affecting these functions.
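
The "foraging optimality" the abstract refers to follows the marginal value theorem: an optimal forager leaves a depleting patch when its instantaneous reward rate falls to the long-run average rate, and staying beyond that point is overharvesting. Here is a minimal numerical sketch of that logic, with assumed functional forms and parameters rather than the study's actual task values.

```python
# Minimal sketch (assumed functional forms, not the study's task) of the
# marginal value theorem logic behind "overharvesting": leave the patch when
# the instantaneous reward rate drops to the overall average reward rate.
import numpy as np

A, lam, travel = 10.0, 0.5, 6.0            # patch yield, depletion rate, travel time (assumed)

def gain(t):            # cumulative milk collected after t seconds in a patch
    return A * (1 - np.exp(-lam * t))

def instant_rate(t):    # marginal (instantaneous) reward rate within the patch
    return A * lam * np.exp(-lam * t)

def overall_rate(t):    # long-run reward rate if the forager always leaves at time t
    return gain(t) / (t + travel)

ts = np.linspace(0.01, 20, 2000)
optimal_leave = ts[np.argmax(overall_rate(ts))]       # MVT-optimal leaving time
observed_leave = 1.4 * optimal_leave                  # an "overharvesting" forager

print(f"optimal leave time ~ {optimal_leave:.1f}s, rate {overall_rate(optimal_leave):.2f}/s; "
      f"overstaying to {observed_leave:.1f}s gives rate {overall_rate(observed_leave):.2f}/s")
```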

Monday, August 28, 2023

A shared novelty-seeking basis for creativity and curiosity

I pass on the abstract of a target article with the same title as this post, sent to me by Behavioral and Brain Sciences. I'm reading through it, and would be willing to send a PDF of the article to motivated MindBlog readers who wish to check it out.
Curiosity and creativity are central pillars of human growth and invention. While they have been studied extensively in isolation, the relationship between them has not yet been established. We propose that curiosity and creativity both emanate from the same mechanism of novelty-seeking. We first present a synthesis showing that curiosity and creativity are affected similarly by a number of key cognitive faculties such as memory, cognitive control, attention, and reward. We then review empirical evidence from neuroscience research, indicating that the same brain regions are involved in both curiosity and creativity, focusing on the interplay between three major brain networks: the default-mode network, the salience network, and the executive control network. After substantiating the link between curiosity and creativity, we propose a novelty-seeking model (NSM) that underlies them both and suggest that the manifestation of the NSM is governed by one’s state of mind (SoM).

Wednesday, July 26, 2023

A MindBlog paragraph on non-dual awareness massaged by Bard and ChatGPT-4

This post uses MindBlog to archive a further example of the sort shown in Monday’s post: GPT bots editing a paragraph I wrote, this one on non-dual awareness. The paragraph:

The fact that  there is no there there means it. Empty is empty. The ground state is just the mute body doing its thing. From its depths storylines rise and fall, bonbons to savor or ignore. The body can know that it has found a new ground state, and can be no longer willing to just play nice if the upstairs word salad is dinking around with or threatening its peace. It can withdraw from the fray. It can rest as a calm non-dual self centered within a cosmic perspective of time and space that views the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.

After incorporating some changes suggested by Google Bard and GPT4:

When we say something is empty, we mean just that: it contains nothing. Our natural baseline or 'ground state' is the body performing its automatic functions without our conscious intervention. From the body's inherent depths, various thoughts or experiences emerge and recede, like bonbons we can savor or ignore. The body can grow to know that it has found a new ground state in the present moment, a ground state of awareness that no longer needs to be caught up in word salads of disruptive thoughts that can threaten its peace.  It can withdraw from the fray and can rest as a calm non-dual state of being centered within a cosmic perspective of time and space that can view the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.


Wednesday, June 21, 2023

Turing, von Neumann, and the computational architecture of biological machines

I pass on the abstract of a PNAS perspective article by Hashim M. Al-Hashimi (motivated readers can obtain a PDF of the article from me).
In the mid-1930s, the English mathematician and logician Alan Turing invented an imaginary machine which could emulate the process of manipulating finite symbolic configurations by human computers. His machine launched the field of computer science and provided a foundation for the modern-day programmable computer. A decade later, building on Turing’s machine, the American–Hungarian mathematician John von Neumann invented an imaginary self-reproducing machine capable of open-ended evolution. Through his machine, von Neumann answered one of the deepest questions in Biology: Why is it that all living organisms carry a self-description in the form of DNA? The story behind how two pioneers of computer science stumbled on the secret of life many years before the discovery of the DNA double helix is not well known, not even to biologists, and you will not find it in biology textbooks. Yet, the story is just as relevant today as it was eighty years ago: Turing and von Neumann left a blueprint for studying biological systems as if they were computing machines. This approach may hold the key to answering many remaining questions in Biology and could even lead to advances in computer science.

Monday, May 15, 2023

People who talk too much

I host a monthly discussion group in Austin, TX, The Austin Rainbow Forum, that meets at 2 pm on the first Sunday of every month to consider interesting topics and ideas. This past May 7, one of our group members led a discussion of "overtalking" in the modern world, which has got us all spouting opinions, giving advice, and getting ourselves in trouble, according to Dan Lyons in his recent book "STFU: The Power of Keeping Your Mouth Shut in an Endlessly Noisy World." The central ideas in Lyons’ book are summarized in this Time Magazine article. I looked through a reviewer's copy of the book I was sent, and suggest that it is worth a look if you are stimulated by the summary article. The bottom line of the book could be stated as "Shut up and listen instead of talking so much." Lyons offers five nudges:

-When possible, say nothing

-Master the power of the pause

-Quit social media

-Seek out silence

-Learn how to listen

Lyons is a professional columnist who writes with a very engaging style, even if the level of his coverage is sometimes a bit superficial. (He quotes a researcher who studied brain activity and “figured out what causes talkaholism”; unfortunately, on doing a quick lookup of the work describing the neuronal measurements, I found that there is no there there.)

Friday, May 12, 2023

Virality

This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13 from the seventh section of her book, titled "Virality."

The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our  human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.

The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.

When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
            
The general public has become preoccupied by robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.

The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to decipher between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.

Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that swept the Go championship in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”

If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” a phrase that is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.

If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.”  She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.”


Friday, May 05, 2023

The Data Deluge - Dataism

This post is the eighth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 11 and 12 from the  sixth section of her book, titled "Algorithm."

Chapter 11  

In the year 2001 alone, the amount of information generated doubled that of all information produced in human history. In 2002 it doubled again, and this trend has continued every year since. As Anderson noted, researchers in virtually every field have so much information that it is difficult to find relationships between things or make predictions.

What companies like Google discovered is that when you have data on this scale, you no longer need a theory at all. You can simply feed the numbers into algorithms and let them make predictions based on the patterns and relationships they notice…
Google Translate “learned” to translate English to French simply by scanning Canadian documents that contained both languages, even though the algorithm has no model that understands either language.

These mathematical tools can predict and understand the world more adequately than any theory could.  Petabytes allow us to say: ‘Correlation is enough,’…We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can let statistical algorithms find patterns where science cannot. Of course, data alone can’t tell us why something happens—the variables on that scale are too legion—but maybe our need to know why was misguided. Maybe we should stop trying to understand the world and instead trust the wisdom of algorithms…technologies that have emerged .. have not only affirmed the uselessness of our models but revealed that machines are able to generate their own models of the world…this approach makes a return to a premodern epistemology..If we are no longer permitted to ask why…we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment...

Deep learning, an especially powerful brand of machine learning, has become the preferred means of drawing predictions from our era’s deluge of raw data. Credit auditors use it to decide whether or not to grant a loan. The CIA uses it to anticipate social unrest. The systems can be found in airport security software…many people now find themselves in a position much like Job’s, denied the right to know why they were refused a loan or fired from a job or given a likelihood of developing cancer. It’s difficult, in fact, to avoid the comparison to divine justice, given that our justice system has become a veritable laboratory of machine-learning experiments…In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”

Hans Blumenberg, the postwar German philosopher, notes in his 1966 book The Legitimacy of the Modern Age—one of the major disenchantment texts—that theologians began to doubt around the thirteenth century that the world could have been created for man’s benefit…Blumenberg believed that it was impossible to understand ourselves as modern subjects without taking into account the crisis that spawned us. To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated but implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
            
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually returns to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real . The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.

Chapter 12

Here is a smaller number of clips from the last section of Chapter 12, on the errors of algorithms.

It’s not difficult to find examples these days of technologies that contain ourselves “in a different disguise.” Although the most impressive machine-learning technologies are often described as “alien” and unlike us, they are prone to errors that are all too human. Because these algorithms rely on historical data—using information about the past to make predictions about the future—their decisions often reflect the biases and prejudices that have long colored our social and political life. Google’s algorithms show more ads for low-paying jobs to women than to men. Amazon’s same-day delivery algorithms were found to bypass black neighborhoods. A ProPublica report found that the COMPAS sentencing assessment was far more likely to assign higher recidivism rates to black defendants than to white defendants. These systems do not target specific races or genders, or even take these factors into account. But they often zero in on other information—zip codes, income, previous encounters with police—that are freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.

It is much easier…to blame injustice on faulty algorithms than it is to contend in more meaningful ways with what they reveal about us and our society. In many cases the reflections of us that these machines produce are deeply unflattering. To take a particularly publicized example, one might recall Tay, the AI chatbot that Microsoft released in 2016, which was designed to engage with people on Twitter and learn from her interactions with users. Within sixteen hours she began spewing racist and sexist vitriol, denied the Holocaust, and declared support for Hitler.

For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths. The desire to send humans to space was for her a metaphor for this dream of scientific transcendence. She tried to imagine what the earth and terrestrial human activity must look like from so far beyond its surface:
            
“If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats. Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is “to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.”
            
The problem is that a vantage so far removed from human nature cannot account for human agency. The view of earth from the Archimedean point compels us to regard our inventions not as historical choices but as part of an inexorable evolutionary process that is entirely deterministic and teleological, much like Kurzweil’s narrative about the Singularity. We ourselves inevitably become mere cogs in this machine, unable to account for our actions in any meaningful way, as the only valid language is the language of quantification, which machines understand far better than we do.

This is more or less what Jaron Lanier“warned about in his response to Chris Anderson’s proposal that we should abandon the scientific method and turn to algorithms for answers. “The point of a scientific theory is not that an angel will appreciate it,” Lanier wrote. “Its purpose is human comprehension. Science without a quest for theories means science without humans.” What we are abdicating, in the end, is our duty to create meaning from our empirical observations—to define for ourselves what constitutes justice, and morality, and quality of life—a task we forfeit each time we forget that meaning is an implicitly human category that cannot be reduced to quantification. To forget this truth is to use our tools to thwart our own interests, to build machines in our image that do nothing but dehumanize us.

 

Monday, May 01, 2023

Panpsychism and Metonymy

This post is the seventh installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 9 and 10 from the fifth section of her book, titled "Metonymy."

Chapter 9

Panpsychism has surfaced from time to time over the centuries, as in the philosophy of Bertrand Russell and Arthur Eddington, who realized that the two most notable “gaps” in physicalism—the problem of consciousness and the “problem of intrinsic natures” (the question of what matter is)—could be solved in one fell swoop. Physics could not tell us what matter was made out of, and nobody could understand what consciousness was, so maybe consciousness was, in fact, the fundamental nature of all matter. Mental states were the intrinsic nature of physical states…The impasse surrounding the hard problem of consciousness and the weirdness of the quantum world has created a new openness to the notion that the mind should have never been excluded from the physical sciences in the first place.

Some neuroscientists have arrived at panpsychism not through philosophy but via information theory. One of the leading contemporary theories of consciousness is integrated information theory, or IIT. Pioneered by Giulio Tononi and Christof Koch…IIT holds that consciousness is bound up with the way that information is “integrated” in the brain. Information is considered integrated when it cannot be easily localized but instead relies on highly complex connections across different regions of the brain…They have come up with a specific number, Φ, or phi, which they believe is a threshold and is designed to measure the interdependence of different parts of a system…many other creatures have a nonzero level of phi, which means that they too are conscious—as are atoms, quarks, and some single-celled organisms…Unlike emergentism and other systems theories that cleverly redefine terms like “consciousness” and “cognition” so that they apply to forests and “insect colonies, panpsychists believe that these entities truly possess some kind of phenomenal experience—that it feels like something to be a mouse, an amoeba, or a quark…Although the theory is still a minority position within academia, there is undoubtedly more openness today to theories that upturn modern orthodoxies to extend consciousness down the chain of being.  

While popular debates about the theory rarely extend beyond the plausibility of granting consciousness to bees and trees, it contains far more radical implications. To claim that reality itself is mental is to acknowledge that there exists no clear boundary between the subjective mind and the objective world. When Bacon denounced our tendency to project inner longings onto scientific theories, he took it for granted—as most of us do today—that the mind is not part of the physical world, that meaning is an immaterial idea that does not belong to objective reality. But if consciousness is the ultimate substrate of everything, these distinctions become blurred, if not totally irrelevant. It’s possible that there exists a symmetry between our interior lives and the world at large, that the relationship between them is not one of paradox but of metonymy—the mind serving as a microcosm of the world’s macroscopic consciousness. Perhaps it is not even a terrible leap to wonder whether the universe can communicate with us, whether life is full of “correspondences,” as the spiritualists called them, between ourselves and the transcendent realm—whether, to quote Emerson, “the whole of nature is a metaphor of the human mind.”

Although integrated information theory is rooted in longstanding analogies between the brain and digital technologies, it remains uncertain whether the framework allows for machine consciousness. Koch argues that nothing in IIT necessitates that consciousness is unique to organic forms of life… So long as a system meets the minimum requirements of integrated information, it could in principle become conscious, regardless of whether it’s made of silicon or brain tissue. However, most digital computers have sparse and fragmented connectivity that doesn’t allow for a high level of integration.

One of the central problems in panpsychism is the “combination problem.” This is the challenge of explaining how conscious microsystems give way to larger systems of unified consciousness. If neurons are conscious—and according to Koch they have enough phi for “an itsy-bitsy amount of experience”—and my brain is made of billions of neurons, then why do I have only one mind and not billions? Koch’s answer is that a system can be conscious only so long as it does not contain and is not contained within something with a higher level of integration. While individual neurons cultured in a petri dish might be conscious, the neurons in an actual brain are not, because they are subsumed within a more highly integrated system...This is why humans are conscious while society as a whole is not. Although society is the larger conglomerate, it is less integrated than the human brain, which is why humans do not become swallowed up in the collective consciousness the way that neurons do.

It is, however, undeniable that society is becoming more and more integrated. Goff pointed out recently that if IIT is correct, then social connectivity is a serious existential threat. Assuming that the internet reaches a point where its information is more highly integrated than that of the human brain, it would become conscious, while all our individual human brains would become absorbed into the collective mind. “Brains would cease to be conscious in their own right,” Goff writes, “and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity.” Goff likens this scenario to the visions of Pierre Teilhard de Chardin, the French Jesuit priest who, as we’ve seen, prophesied the coming Omega Point and inspired aspects of transhumanism. Once humanity is sufficiently connected via our information technologies, Teilhard predicted, we will all fuse into a single universal mind—the noosphere—enacting the Kingdom of Heaven that Christ promised.
           
This is already happening, of course, at a pace that is largely imperceptible - in the speed with which ideas go viral, cascading across social platforms, such that the users who share them begin to seem less like agents than as hosts, nodes in the enormous brain…in the efficiency of consensus, the speed with which opinions fuse and solidify alongside the news cycle, like thought coalescing in the collective consciousness. We have terms that attempt to catalogue this merging—the “hive mind,” “groupthink” -  times when I become aware of my own blurred boundaries, seized by the suspicion that I am not forming new opinions so much as assimilating them…I don’t know what to call this state of affairs, but it does not feel like the Kingdom of God.


Chapter 10

From the end of the chapter:

 “Idealism and panpsychism are appealing in that they offer a way of believing once again in the mind—not as an illusion or an epiphenomenon but as a feature of our world that is as real as anything else. But its proponents rarely stop there. In some cases they go on to make the larger claim that there must therefore exist some essential symmetry between the mind and the world, that the patterns we observe in our interior lives correspond to a more expansive, transcendent truth. Proponents of these theories occasionally appeal to quantum physics to argue that the mind-matter dichotomy is false—clearly there exists some mysterious relationship between the two. But one could just as easily argue that physics has, on the contrary, affirmed this chasm, demonstrating that the world at its most fundamental level is radically other than ourselves—that the universe is, as Erwin Schrödinger put it, “not even thinkable.”

This is precisely the modern tension that Arendt calls attention to in The Human Condition. On the one hand, the appearance of order in the world—the elegance of physical laws, the usefulness of mathematics—tempts us to believe that our mind is made in its image, that “the same patterns rule the macrocosm and the microcosm alike.” In the enchanted world order was seen as proof of eternal unity, evidence that God was present in all things, but for the modern person this symmetry leads inevitably back to Cartesian doubt—the suspicion that the order perceived stems from some mental deception. We have good reason to entertain such suspicions, Arendt argues. Since Copernicus and Galileo, science has overturned the most basic assumptions about reality and suggested that our sensory perception is unreliable. This conclusion became unavoidable with the discovery of general relativity and quantum physics, which suggest that “causality, necessity, and lawfulness are categories inherent in the human brain and applicable only to the common-sense experiences of earthbound creatures.” We keep trying to reclaim the Archimedean point, hoping that science will allow us to transcend the prison of our perception and see the world objectively. But the world that science reveals is so alien and bizarre that whenever we try to look beyond our human vantage point, we are confronted with our own reflection. “It is really as though we were in the hands of an evil spirit,” Arendt writes, alluding to Descartes’s thought experiment, “who mocks us and frustrates our thirst for knowledge, so that whenever we search for that which we are not, we encounter only the patterns of our own minds.”
           
That is not to say that the Archimedean point is no longer possible.  In her 1963 essay “The Conquest of Space and the Stature of Man,” Arendt considers this modern problem in light of emerging technologies. The oddest thing, she notes, is that even though our theories about the world are limited and simplistic and probably wrong, they “work” when implemented into technologies. Despite the fact that nobody understands what quantum mechanics is telling us about the world, the entire computer “age—including every semiconductor, starting with the very first transistor, built in 1947—has rested on well-modeled quantum behavior and reliable quantum equations. The problem is not merely that we cannot understand the world but that we can now build this alien logic into our devices. There are some scientists, Arendt notes, who claim that computers can do “what a human brain cannot comprehend.” Her italics are instructive: it’s not merely that computers can transcend us in sheer brain power—solving theorems faster than we can, finding solutions more efficiently—but that they can actually understand the world in a way that we cannot. She found this proposition especially alarming. “If it should be true…that we are surrounded by machines whose doings we cannot comprehend although we have devised and constructed them,” she writes, “it would mean that the theoretical perplexities of the natural sciences on the highest level have invaded our everyday world.” This conclusion was remarkably prescient.”


 

Friday, April 28, 2023

Are we living in a simulated world?

This post is the sixth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 8 from the  fourth section of her book,  titled "Paradox."

Bostrom, a prominent transhumanist, believes that humanity is in the process of becoming posthuman as we merge our bodies with technology. We are becoming superintelligence ourselves. His simulation hypothesis begins by imagining a future, many generations from now, when posthumans have achieved an almost godlike mastery over the world. One of the things these posthumans might do, Bostrom proposes, is create simulations—digital environments that contain entire worlds…The inhabitants will not know that they are living in a simulation but will believe their world is all that exists…the theory’s popularity has escalated over the past decade or so. It has gained an especially fervent following among scientists and Silicon Valley luminaries, including Neil deGrasse Tyson and Elon Musk, who have come out as proponents…It has become, in other words, the twenty-first century’s favored variation on Descartes’s skeptical thought experiment—the proposition that our minds are lying to us, that the world is radically other than it seems.

…for all its supposed “naturalism,” the simulation hypothesis is ultimately an argument from design. It belongs to a long lineage of creationist rhetoric that invokes human technologies to argue that the universe could not have come about without the conscious intention of a designer…Bostrom acknowledged in his paper that there were “some loose analogies” that could be drawn between the simulation hypothesis and traditional religious concepts. The programmers who created the simulation would be like gods compared to those of us within the simulation.

One of the common objections to the informational universe is that information cannot be “ungrounded,” without a material instantiation. Claude Shannon, the father of information theory, insisted that information had to exist in some kind of physical medium, like computer hardware…if the universe were an enormous computer, then this information would in fact be instantiated on something material, akin to a hard drive. We wouldn’t be able to see or detect it because it would exist in the universe of the programmers who built it. All we would notice was its higher-level structure, the abstract patterns and laws that were part of its software. The simulation hypothesis, in other words, could explain why our universe is imbued with discernible patterns and mathematical regularities while also explaining how those patterns could be rooted in something more than mere abstractions. Perhaps Galileo was not so far off when he imagined the universe as a book written by God in the language of mathematics. The universe was software written by programmers in the binary language of code…“if you take this thesis to its conclusion, it doesn’t really explain anything about the universe or its origins. Presumably there is still some original basement-level reality at its foundation—there could be no true infinite regress—occupied by first posthumans who created the very first technological simulation. But these posthumans were just our descendants—or the descendants of some other species that had evolved on another planet—and so the question about origins remained unchanged, only pushed back one degree. Where did the universe originally come from?

Bohr…observed that humans are incapable of understanding the world beyond “our necessarily prejudiced conceptual frame.” And perhaps it can explain why the multiverse theory and other attempts to transcend our anthropocentric outlook seem a form of bad faith, guilty of the very hubris they claim to reject. There is no Archimedean point, no purely objective vista that allows us to transcend our human interests and see the world from above, as we once imagined it appeared to God. It is our distinctive vantage that binds us to the world and sets the necessary limitations that are required to make sense of it. This is true, of course, regardless of which interpretation of physics is ultimately correct. It was Max Planck, the physicist who struggled more than any other pioneer of quantum theory to accept the loss of a purely objective worldview, who acknowledged that the central problems of physics have always been reflexive. “Science cannot solve the ultimate mystery of nature,” he wrote in 1932. “And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.”

 

Wednesday, April 26, 2023

Is the mind a reliable mirror of reality? The marriage of physics and information theory

 This post is the fifth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 7 from the  fourth section of her book,  titled "Paradox."

Is the mind a reliable mirror of reality? Do the patterns we perceive belong to the objective world, or are they merely features of our subjective experience? Given that physics was founded on the separation of mind and matter, subject and object, it’s unsurprising that two irreconcilable positions that attempt to answer this question have emerged: one that favors subjectivity, the other objectivity. Bohr’s view was that quantum physics describes our subjective experience of the world; it can tell us only about what we observe. Mathematical equations like the wave function are merely metaphors that translate this bizarre world into the language of our perceptual interface—or, to borrow Kant’s analogy, spectacles that allow us to see the chaotic world in a way that makes sense to our human minds. Other interpretations of physics, like the multiverse theory or string theory, regard physics not as a language we invented but as a description of the real, objective world that exists out there, independent of us. Proponents of this view tend to view equations and physical laws as similarly transcendent, corresponding to literal, or perhaps even Platonic, realities.

The marriage of physics and information theory is often attributed to John Wheeler, the theoretical physicist who pioneered, with Bohr, the basic principles of nuclear fission. In the late 1980s, Wheeler realized that the quantum world behaved a lot like computer code. An electron collapsed into either a particle or a wave depending on how we interrogated it. This was not dissimilar from the way all messages can be simplified into “binary units,” or bits, which are represented by zeros and ones. Claude Shannon, the father of information theory, had defined information as “the resolution of uncertainty,” which seemed to mirror the way quantum systems existed as probabilities that collapsed into one of two states. For Wheeler these two fields were not merely analogous but ontologically identical. In 1989 he declared that “all things physical are information-theoretic in origin.”
            
In a way Wheeler was exploiting a rarely acknowledged problem that lies at the heart of physics: it’s uncertain what matter actually is. Materialism, it is often said, is not merely an ontology but a metaphysics—an attempt to describe the true nature of things. What materialism says about our world is that matter is all that exists: everything is made of it, and nothing exists outside of it. And yet, ask a physicist to describe an electron or a quark, and he will speak only of its properties, its position, its behavior—never its essence.

Wheeler’s answer was that matter itself does not exist. It is an illusion that arises from the mathematical structures that undergird everything, a cosmic form of information processing. Each time we make a measurement we are creating new information—we are, in a sense, creating reality itself. Wheeler called this the “participatory universe,” a term that is often misunderstood as having mystical “connotations, as though the mind has some kind of spooky ability to generate objects. But Wheeler did not even believe that consciousness existed. For him, the mind itself was nothing but information. When we interacted with the world, the code of our minds manipulated the code of the universe, so to speak. It was a purely quantitative process, the same sort of mathematical exchange that might take place between two machines.            

While this theory explains, or attempts to explain, how the mind is able to interact with matter, it is a somewhat evasive solution to the mind-body problem, a sleight of hand that discards the original dichotomy by positing a third substance—information—that can explain both. It is difficult, in fact, to do justice to how entangled and self-referential these two fields—information theory and physics—have become, especially when one considers their history. The reason that cybernetics privileged relationships over content in the first place was so that it could explain things like consciousness purely in terms of classical physics, which is limited to describing behavior but not essence—“doing” but not “being.” When Wheeler merged information theory with quantum physics, he was essentially closing the circle, proposing that the hole in the material worldview—intrinsic essence—could be explained by information itself.

Seth Lloyd, an MIT professor who specializes in quantum information, insists that the universe is not like a computer but is in fact a computer. “The universe is a physical system that contains and processes information in a systematic fashion,” he argues, “and that can do everything a computer can do.” Proponents of this view often point out that recent observational data seems to confirm it. Space-time, it turns out, is not smooth and continuous, as Einstein’s general relativity theory assumed, but more like a grid made up of minuscule bits—tiny grains of information that are not unlike the pixels of an enormous screen. Although we experience the world in three dimensions, it seems increasingly likely that all the information in the universe arises from a two-dimensional field, much like the way holograms work, or 3-D films.
            
When I say that I try very hard to avoid the speculative fringe of physics, this is more or less what I am talking about. The problem, though, is that once you’ve encountered these theories it is difficult to forget them, and the slightest provocation can pull you back in. It happened a couple years ago, while watching my teenage cousin play video games at a family gathering. I was relaxed and a little bored and began thinking about the landscape of the game, the trees and the mountains that made up the backdrop. The first-person perspective makes it seem like you’re immersed in a world that is holistic and complete, a landscape that extends far beyond the frame, though in truth each object is generated as needed. Move to the right and a tree is generated; move to the left and a bridge appears, creating the illusion that it was there all along. What happened to these trees and rocks and mountains when the player wasn’t looking? They disappeared—or no, they were never there to begin with; they were just a line of code. Wasn’t this essentially how the observer effect worked? The world remained in limbo, a potentiality, until the observer appeared and it was compelled to generate something solid. Rizwan Virk, a video game programmer, notes that a core mantra in programming is “only render that which is being observed.”
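An aside from me rather than from O’Gieblyn: the “only render that which is being observed” mantra is easy to make concrete. Below is a minimal Python sketch of my own (the LazyWorld class and its names are invented for illustration) in which scenery exists only as code until a player observes a location:

    import random

    class LazyWorld:
        """Scenery that exists only as code until someone looks at it."""

        def __init__(self, seed=42):
            self.seed = seed
            self.rendered = {}  # only the locations a player has actually observed

        def observe(self, x, y):
            """Return the object at (x, y), generating it on first observation."""
            if (x, y) not in self.rendered:
                rng = random.Random(hash((self.seed, x, y)))  # deterministic per location
                self.rendered[(x, y)] = rng.choice(["tree", "rock", "bridge", "mountain"])
            return self.rendered[(x, y)]

    world = LazyWorld()
    print(world.observe(10, 3))   # the tree comes into being only when looked at
    print(len(world.rendered))    # everything unobserved remains a potentiality: just code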
            
Couldn’t the whole canon of quantum weirdness be explained by this logic? Software programs are never perfect. Programmers cut corners for efficiency—they are working, after all, with finite computing power; even the most detailed systems contain areas that are fuzzy, not fully sketched out. Maybe quantum indeterminacy simply reveals that we’ve reached the limits of the interface. The philosopher Slavoj Žižek once made a joke to this effect. Perhaps, he mused, God got a little lazy when he was creating the universe, like the video game programmer who doesn’t bother to meticulously work out the interior of a house that the player is not meant to enter. “He stopped at a subatomic level,” he said, “because he thought humans would be too stupid to progress so far.”

Monday, April 24, 2023

Networks and Emergentism

This post is the fourth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 5 and 6 form the third section of her book,  titled "Network."

From Chapter 5:

When it comes to biological systems like forests and swarms, emergent behavior that appears to be unified and intelligent can exist without a centralized control system like a brain. But the theory has also been applied to the brain itself, as a way to account for human consciousness. Although most people tend to think of the brain as the body’s central processing unit, the organ itself has no central control. Philosophers and neuroscientists often point out that our belief in a unified interior self—the illusion, as Richard Dawkins once put it, that we are “a unit, not a colony”—has no basis in the actual architecture of the brain. Instead there are only millions of unconscious parts that conspire, much like a bee colony, to create a “system” that is intelligent. Emergentism often entails that consciousness isn’t just in the head; it emerges from the complex relationships that exist throughout the body, and also from the interactions between the body and its environment.

Although emergentism is rooted in physicalism, critics have often claimed that there is something inherently mystical about the theory, particularly when these higher-level patterns are said to be capable of controlling or directing physical processes...few emergentists have managed to articulate precisely what kind of structure might produce consciousness in machines; in some cases the mind is posited simply as a property of “complexity,” a term that is eminently vague. Some critics have argued that emergentism is just an updated version of vitalism—the ancient notion that the world is animated by a life force or energy that permeates all things…Although emergentism is focused specifically on consciousness, as opposed to life itself, the theory is vulnerable to the same criticism that has long haunted vitalism: it is an attempt to get “something from nothing.” It hypothesizes some additional, invisible power that exists within the mechanism, like a ghost in the machine.

…emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through either ingenuity or sheer accident—consciousness will simply emerge as a side effect of complexity. At some point nature will step in and finish the job…aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself.

From Chapter 6:

…once the world was a sacred and holy place, full of chatty and companionable objects—rocks and trees that were capable of communicating with us—we now live in a world that has been rendered mute… some disenchantment narratives place the fall from grace not with the Enlightenment and the rise of modern science but with the emergence of monotheism. The very notion of imago dei, with humanity created in God’s image and given “dominion” over creation, has linked human exceptionalism with the degradation of the natural world.  Is it possible to go back? Or are these narratives embedded so deeply in the DNA of our ontological assumptions that a return is impossible? This is especially difficult when it comes to our efforts to create life from ordinary matter…In the orthodox forms of Judaism and Christianity, the ability to summon life from inert matter is denounced as paganism, witchcraft, or idolatry.

Just as the golems were sculpted out of mud and animated with a magical incantation, so the hope persists that robots built from material parts will become inhabited by that divine breath…While these mystical overtones should not discredit emergence as such—it is a useful enough way to describe complex systems like beehives and climates—the notion that consciousness can emerge from machines does seem to be a form of wishful thinking, if only because digital technologies were built on the assumption that consciousness played no role in the process of intelligence. Just as it is somewhat fanciful to believe that science can explain consciousness when modern science itself was founded on the exclusion of the mind, it is difficult to believe that technologies designed specifically to elide the notion of the conscious subject could possibly come to develop an interior life.
           
To dismiss emergentism as sheer magic is to ignore the specific ways in which it differs from the folklore of the golems—even as it superficially satisfies the same desire. Scratch beneath the mystical surface and it becomes clear that emergentism is often not so different from the most reductive forms of materialism, particularly when it comes to the question of human consciousness. Plant intelligence has been called a form of “mindless mastery,” and most emergentists view humans as similarly mindless. We are not rational agents but an encasement of competing systems that lack any sort of unity or agency. Minsky once described the mind as “a sort of tangled-up bureaucracy” whose parts remain ignorant of one another.

Just as the intelligence of a beehive or a traffic jam resides in the patterns of these inert, intersecting parts, so human consciousness is merely the abstract relationships that emerge out of these systems: once you get to the lowest level of intelligence, you inevitably find, as Minsky put it, agents that “cannot think at all.” There is no place in this model for what we typically think of as interior experience, or the self.

Embodied artificial intelligence is being pursued in laboratories using humanoid robots equipped with sensors and cameras that endow the robots with sensory functions and motor skills. The theory is that these sensorimotor capacities will eventually lead to more advanced cognitive skills, such as a sense of self or the ability to use language, though so far this has not happened.
 

Friday, April 21, 2023

Equivalence of the metaphors of the major religions and transhumanism

This post is the third installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 3 and 4 form the second section of her book,  titled "Pattern."

From Chapter 3:

Once animal brains began to form, the information became encoded in neural patterns. Now that evolution has produced intelligent, tool-wielding humans, we are designing new information technologies more sophisticated than any object the world has yet seen. These technologies are becoming more complex and powerful each year, and very soon they will transcend us in intelligence. The ‘transhumanist’  movement believes that the only way for us to survive as humans is to begin merging our bodies with these technologies, transforming ourselves into a new species—what Kurzweil calls “posthumans,” or spiritual machines. Neural implants, mind-uploading, and nanotechnology will soon be available, he promises. With the help of these technologies, we will be able to transfer or “resurrect” our minds onto supercomputers, allowing us to become immortal. Our bodies will become incorruptible, immune to disease and decay, and each person will be able to choose a new, customizable virtual physique.

From Chapter 4:

…how is it that the computer metaphor—an analogy that was expressly designed to avoid the notion of a metaphysical soul - has returned to us ancient religious ideas about physical transcendence and the disembodied spirit?

In his book “You Are Not a Gadget”, the computer scientist Jaron Lanier argues that just as the Christian belief in an immanent Rapture often conditions disciples to accept certain ongoing realities on earth—persuading them to tolerate wars, environmental destruction, and social inequality—so too has the promise of a coming Singularity served to justify a technological culture that privileges information over human beings. “If you want to make the transition from the old religion, where you hope God will give you an afterlife,” Lanier writes, “to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive.” This sacralizing of information is evident in the growing number of social media platforms that view their human users as nothing more than receptacles of data. It is evident in the growing obsession with standardized testing in public schools, which is designed to make students look good to an algorithm. It is manifest in the emergence of crowd-sourced sites such as Wikipedia, in which individual human authorship is obscured so as to endow the content with the transcendent aura of a holy text. In the end, transhumanism and other techno-utopian ideas have served to advance what Lanier calls an “antihuman approach to computation,” a digital climate in which “bits are presented as if they were alive, while humans are transient fragments.”

In a way we are already living the dualistic existence that Kurzweil promised. In addition to our physical bodies, there exists—somewhere in the ether—a second self that is purely informational and immaterial, a data set of our clicks, purchases, and likes that lingers not in some transcendent nirvana but rather in the shadowy dossiers of third-party aggregators. These second selves are entirely without agency or consciousness; they have no preferences, no desires, no hopes or spiritual impulses, and yet in the purely informational sphere of big data, it is they, not we, that are most valuable and real.

Kurzweil, too, found an “essential equivalence” between transhumanist metaphors and Christian metaphors: both systems of thought placed a premium value on consciousness. The nature of consciousness—as well as the question of who and what is conscious—is the fundamental philosophical question, he said, but it’s a question that cannot be answered by science alone. This is why we need metaphors. “religion deals with legitimate questions but the major religions emerged in pre-scientific times so that the metaphors are pre-scientific. That the answers to existential questions are necessarily metaphoric is necessitated by the fact that we have to transcend mere matter and energy to find answers…The difference between so-called atheists and people who believe in “God” is a matter of the choice of metaphor, and we could not get through our lives without having to choose metaphors for transcendent questions.”
           
Perhaps all these efforts—from the early Christians’ to the medieval alchemists’ to those of the luminaries of Silicon Valley—amounted to a singular historical quest, one that was expressed through analogies that were native to each era. Perhaps our limited vantage as humans meant that all we could hope for were metaphors of our own making, that we would continually grasp at the shadow of absolute truths without any hope of attainment.
 

Wednesday, April 19, 2023

The Illusion of the Self as Humans become Gods.

This post continues my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting.  This post deals with Chapter 2 from the first section of the book, 'Image.'  I’m discontinuing the experiment of including Chat GPT 4 condensations of the excerpts. Here are the clips:

It turns out that computers are particularly adept at the tasks that we humans find most difficult: crunching equations, solving logical propositions, and other modes of  abstract thought. What artificial intelligence finds most difficult are the sensory perceptive tasks and motor skills that we perform unconsciously: walking, drinking from a cup, seeing and feeling the world through our senses. Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.
            
If there were gods, they would surely be laughing their heads off at the inconsistency of our logic. We spent centuries denying consciousness in animals precisely because they lacked reason or higher thought.

Metaphors are typically abandoned once they are proven to be insufficient. But in some cases, they become only more entrenched: the limits of the comparison come to redefine the concepts themselves. This latter tactic has been taken up by the eliminativists, philosophers who claim that consciousness simply does not exist. Just as computers can operate convincingly without any internal life, so can we. According to these thinkers, there is no “hard problem” because that which the problem is trying to explain—interior experience—is not real. The philosopher Galen Strawson has dubbed this theory “the Great Denial,” arguing that it is the most absurd conclusion ever to have entered into philosophical thought—though it is one that many prominent…

Chief among the deniers is Daniel Dennett, who has often insisted that the mind is illusory. Dennett refers to the belief in interior experience derisively as the “Cartesian theater,” invoking the popular delusion—again, Descartes’s fault—that there exists in the brain some miniature perceiving entity, a homunculus that is watching the brain’s representations of the external world projected onto a movie screen and making decisions about future actions. One can see the problem with this analogy without any appeals to neurobiology: if there is a homunculus in my brain, then it must itself (if it is able to perceive) contain a still smaller homunculus in its head, and so on, in infinite regress.
            
Dennett argues that the mind is just the brain and the brain is nothing but computation, unconscious all the way down. What we experience as introspection is merely an illusion, a made-up story that causes us to think we have “privileged access” to our thinking processes. But this illusion has no real connection to the mechanics of thought, and no ability to direct or control it. Some proponents of this view are so intent on avoiding the sloppy language of folk psychology that any reference to human emotions and intentions is routinely put in scare quotes. We can speak of brains as “thinking,” “perceiving,” or “understanding” so long as it’s clear that these are metaphors for the mechanical processes. “The idea that, in addition to all of those, there is this extra special something—subjectivity—that distinguishes us from the zombie,” Dennett writes, “that’s an illusion.”

Perhaps it’s true that consciousness does not really exist—that, as Brooks put it, we “overanthropomorphize humans.” If I am capable of attributing life to all kinds of inanimate objects, then can’t I do the same to myself? In light of these theories, what does it mean to speak of one’s “self” at all?


Monday, April 17, 2023

Disenchantment of the world and the computational metaphors of our times.

I am doing a second reading of Meghan O’Gieblyn’s book “God, Human, Animal, Machine” and extracting clips of text that I find most interesting.  I’m putting them in a MindBlog post, hoping they will be interesting to some readers, and also because MindBlog is my personal archive of things I want to remember I've engaged with. At least I will know where to look up something I'm trying to recall.

 O’Gieblyn’s text has bursts of magisterial insight interspersed with details of her personal experiences and travails, and the clips try to capture my biased selection of the former. 

The first section of her book, “Image,” has two chapters, and this post passes on Chapter 1, starting with the result of my asking ChatGPT 4 to summarize my clips in approximately 1000 words. It generated the ~300 words below, and I would urge you to continue reading my clips (976 words), which provide a richer account. Subsequent posts in this series will omit Chat GPT summaries, unless they generate something that blows me away.

Here is Chat GPT 4’s summary:

The concept of the soul has become meaningless in modern times, reduced to a dead metaphor that no longer holds any real significance. This is due to the process of disenchantment that has taken place since the dawn of modern science, which has turned the world into a subject of investigation and reduced everything to the causal mechanism of physical laws. This has led to a world that is devoid of the spirit-force that once infused and unified all living things, leaving us with an empty carapace of gears and levers. However, the questions that were once addressed by theologians and philosophers persist in conversations about digital technologies, where artificial intelligence and information technologies have absorbed them.  

Humans have a tendency to see themselves in all beings, as evidenced by our habit of attributing human-like qualities to inanimate objects. This has led to the development of the idea of God and the belief that natural events are signs of human agency. This impulse to see human intention and purpose in everything has resulted in a projection of our image onto the divine, which suggests that metaphors are two-way streets and that it is not always easy to distinguish between the source domain and the target.

The development of cybernetics and the application of the computational analogy to the mind has resulted in the description of the brain as the hardware that runs the software of the mind, with cognitive systems being spoken of as algorithms. However, the use of metaphors like these can lead to a limiting of our understanding of the world and how we interact with it. As cognitive linguist George Lakoff notes, when an analogy becomes ubiquitous, it can be difficult to think around it, and it structures how we think about the world.

And here are the text clips I asked ChatGPT 4 to summarize

It is meaningless to speak of the soul in the twenty-first century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept, like an empty carapace that remains intact years after its animating organism has died. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes, and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons.

We live in a world that is “disenchanted.” The word is often attributed to Max Weber, who argued that before the Enlightenment and Western secularization, the world was “a great enchanted garden.” In the enchanted world, faith was not opposed to knowledge, nor myth to reason. The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws. In place of the pneuma, the spirit-force that once infused and unified all living things, we are now left with an empty carapace of gears and levers—or, as Weber put it, “the mechanism of a world robbed of gods.”
            
If modernity has an origin story, this is our foundational myth, one that hinges, like the old myths, on the curse of knowledge and exile from the garden.

To discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological. Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.

Animism was built into our design. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. Our brains can’t fundamentally distinguish between interacting with people and interacting with devices. Our habit of seeing our image everywhere in the natural world is what gave birth to the idea of God. Early civilizations assumed that natural events bore signs of human agency. Earthquakes happened because the gods were angry. Famine and drought were evidence that the gods were punishing them. Because human communication is symbolic, humans were quick to regard the world as a system of signs, as though some higher being were seeking to communicate through natural events. Even the suspicion that the world is ordered, or designed, speaks to this larger impulse to see human intention and human purpose in every last quirk of “creation.”
    
There is evidently no end to our solipsism. So deep is our self-regard that we projected our image onto the blank vault of heaven and called it divine. But this theory, if true, suggests a deeper truth: metaphors are two-way streets. It is not so easy to distinguish the source domain from the target, to remember which object is the original and which is modeled after its likeness. The logic can flow in either direction. For centuries we said we were made in God’s image, when in truth we made him in ours.

Shannon removed the thinking mind from the concept of information. Meanwhile, McCulloch applied the logic of information processing to the mind itself. This resulted in a model of mind in which thought could be accounted for in purely abstract, mathematical terms, and opened up the possibility that computers could execute mental functions. If thinking was just information processing, computers could be said to “learn,” “reason,” and “understand”—words that were, at least in the beginning, put in quotation marks to denote them as metaphors. But as cybernetics evolved and the computational analogy was applied across a more expansive variety of biological and artificial systems, the limits of the metaphor began to dissolve, such that it became increasingly difficult to tell the difference between matter and form, medium and message, metaphor and reality. And it became especially difficult to explain aspects of the mind that could not be accounted for by the metaphor.

The brain is often described today as the hardware that “runs” the software of the mind. Cognitive systems are spoken of as algorithms: vision is an algorithm, and so are attention, language acquisition, and memory.
            
In 1999 the cognitive linguist George Lakoff noted that the analogy had become such a given that neuroscientists “commonly use the Neural Computation metaphor without noticing that it is a metaphor.” He found this concerning. Metaphors, after all, are not merely linguistic tools; they structure how we think about the world, and when “an analogy becomes ubiquitous, it is impossible to think around it…there is virtually no form of discourse about intelligent human behavior that proceeds without employing this metaphor, just as no form of discourse about intelligent human behavior could proceed in certain eras and cultures without reference to a spirit or deity.”

Wednesday, April 12, 2023

The Physics of Intelligence - and LDMs (Large Danger Models)

I want to pass on my abstracting of an interesting article by Venkatesh Rao, another instance of my using MindBlog as my personal filing system to be sure I can come back to - and refresh my recall of - ideas I think are important.  I also pass on ChatGPT 3.5 and ChatGPT 4's summaries of my summary!

The Physics of Intelligence   -  The missing discourse of AI

There are strong philosophy and engineering discourses, but no physics discourse. This is a problem because when engineers mainline philosophy questions in engineering frames without the moderating influence of physics frames, you get crackpottery…I did not say the physics of artificial intelligence…The physics of intelligence is no more about silicon semiconductors or neurotransmitters than the physics of flight is about feathers or aluminum.

Attention is the focus of one of the six basic questions about the physics of intelligence that I’ve been thinking about. Here is my full list:
What is attention, and how does it work?
What role does memory play in intelligence?
How is intelligence related to information?
How is intelligence related to spacetime?
How is intelligence related to matter?
How is intelligence related to energy and thermodynamics?
 

The first three are obviously at the “physics of intelligence” level of abstraction, just as “wing” is at the “physics of flight” level of abstraction. The last three get more abstract, and require some constraining, but there are already some good ideas floating around on how to do the constraining…We are not talking about the physics of computation in general…computation and intelligence are not synonymous or co-extensive…To talk about intelligence, it is necessary, but not sufficient, to talk about computation. You also have to talk about the main aspects of embodiment: spatial and temporal extent, materiality, bodily integrity maintenance in relation to environmental forces, and thermodynamic boundary conditions.
 

What is attention, and how does it work?

A computer is “paying attention” to the data and instructions in the CPU registers in any given clock cycle…but fundamentally, attention is not a design variable used in complex ways in basic computing. You could say AI begins when you start deploying computational attention in a more dynamic way.

Attention is to intelligence as wing is to flight. The natural and artificial variants have the same sort of loose similarity. Enough that using the same word to talk about both is justified…In AI, attention refers primarily to a scheme of position encoding of a data stream. Transformer models like GPT keep track of the position of each token in the input and output streams, and extract meaning out of it. Where a word is in a stream matters almost as much as what the word is.

You can interpret these mechanisms as attention in a human sense. What is in the context of a text? In text streams, physical proximity (tokens before and after), syntactic proximity (relationship among clauses and words in a formal grammatical sense) and semantic proximity (in the sense of some words, including very distant ones, being significant in the interpretation of others) all combine to create context. This is not that different from how humans process text. So at least to first order, attention in human and artificial systems is quite similar.
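To make the position-encoding idea concrete, here is a minimal NumPy sketch of my own (not from Rao’s essay; the function names are invented): each token embedding is tagged with a sinusoidal position code, and scaled dot-product attention then weights every other token by how relevant it looks from that position.

    import numpy as np

    def positional_encoding(num_tokens, dim):
        """Sinusoidal position codes: one row of 'where am I in the stream?' per token."""
        pos = np.arange(num_tokens)[:, None]
        i = np.arange(dim)[None, :]
        angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
        return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

    def self_attention(x):
        """Single-head scaled dot-product attention with identity projections, for illustration."""
        scores = x @ x.T / np.sqrt(x.shape[-1])          # how relevant each token is to each other
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
        return weights @ x                               # each token becomes a context-weighted mix

    tokens = np.random.randn(6, 16)                      # 6 token embeddings of width 16
    out = self_attention(tokens + positional_encoding(6, 16))
    print(out.shape)                                     # (6, 16): same tokens, now position- and context-aware

Real transformers learn separate query, key, and value projections and use many heads; the sketch keeps only the skeleton that the essay is pointing at, namely that position plus weighted context is what “attention” means in AI.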

But as with wings, the differences matter. Human attention, arguably, is not primarily about information processing at all. It is about energy management. We pay attention to steady our energy into a steady, sustainable, and renewable flow. We engage in low-information activities like meditation, ritual, certain kinds of art, and prayer to learn to govern our attention in specific ways. This is not to make us better at information processing, but for other reasons, such as emotion regulation and motivation. Things like dopamine loops of satisfaction are involved. The use of well-trained attention for better information processing is only one of the effects.

Overall, human attention is more complex and multifaceted than AI attention, just as bird wings are fundamentally more complex mechanically. Attention in the sense of position-encoding for information processing is like the pure lift function of a wing. Human attention, in addition, serves additional functions analogous to control and propulsion type functions.

What role does memory play in intelligence?

The idea of attention leads naturally to the idea of memory. Trivially, memory is a curated record of everything you’ve paid attention to in the past…An obvious way to understand current developments in AI is to think of LLMs and LIMs as idiosyncratically compressed atemporal textual and visual memories. Multimodal models can be interpreted as generalizations of this idea.

Human memory is modulated by evolving risk perceptions as it is laid down, and emotions triggered by existing memories regulate current attention, determining what new data gets integrated into the evolving model (as an aside, this is why human memory exists as a kind of evolving coherent narrative of self, rather than just as a pile of digested stuff).

Biological organism memory is not just an undifferentiated garbage record (a Large Garbage Model, or LGM) of what you paid attention to in the past; it shapes what you pay attention to in the future very directly and continuously. Biological memory is strongly opinionated memory. If a dog bites you…you can’t afford to separate training and inference in the sense of “training” on a thousand dog encounters…you have to use your encounter with Dog 1 to shape your attentional response to Dog 2. Human memories are like LGMs, except that the training process is regulated by a live emotional regulation feedback loop that somehow registers and acts on evolving risk assessments. There’s a term for this in psychology (predictive coding or predictive processing) with a hand-wavy theory of energy-minimization attached, but I don’t find it fully satisfying.

I have a placeholder name for this scheme, but as yet it’s not very fleshed out. Biological memories are Large Danger Models (LDMs).

Why just danger? Why not other signals and drives towards feeding, sex, interestingness, poetry, and so on? I have a stronger suspicion that danger is all you need to generate human-like memory, and in particular human-like experience of time. Human memory is the result of playing to continue the game, ie an infinite-game orientation. Once you have that in place, everything else emerges. It’s not as fundamental as basic survival.

AIs don’t yet count as human-equivalent to me: they’re in no danger, ever. Since we’re in the brute-force stage of training AI models, we train them on basically everything we have, with no danger signal accompanying any of it…AIs today develop their digested memories with no ongoing encoding or capture of the evolving risk and emotion assessments that modulate human memories. Even human grade schools, terrible as they are, do a better job than AI training protocols…the next big leap should be achievable by encoding some kind of environmental risk signal. Ie, we just need to put AIs in danger in the right way. My speculative idea of LDMs doesn’t seem that mysterious. LDMs are an engineering moonshot, not a physics mystery.

To lay it out more clearly, consider a thought experiment...Suppose you put a bunch of AIs in robot bodies, and let them evolve naturally, while scrounging resources for survival. To keep it simple, let’s say they only compete over scarce power outlets to charge their batteries. Their only hardcoded survival behavior is to plug in when running low….Let’s say the robots are randomly initialized to pay attention differently to different aspects of data coursing through them. Some robots pay extra attention to other robots’ actions. Other robots pay extra attention to the rocks in the environment. Obviously, the ones that happen to pay attention in the right ways will end up outcompeting the ones who don’t. The sneaky robots will evolve to be awake when other robots are powered down or hibernating for self-care, and hog the power outlets then. The bigger robots will learn they can always get the power outlets by shoving the smaller ones out of the way.

Now the question is: given all the multimodal data flowing through them, what will the robots choose to actually remember in their limited storage spaces, as their starter models get trained up? What sorts of LDMs will emerge? How will the risk modulation emerge? What equivalent of emotional regulation will emerge? What sense of time will emerge?

The thought experiment of LDMs suggests a particular understanding of memory in relation to intelligence: memory is risk-modulated experiential data persistence that modulates ongoing experiential attention and risk-management choices…It’s a bit of a mouthful, but I think that’s fundamentally it.
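To make that definition concrete, here is a toy sketch of my own (not anything Rao specifies, and it assumes “danger” can be collapsed into a single scalar signal): experiences are written to memory with a strength set by the risk felt at the time, and that stored risk then biases what gets attended to next.

    import random
    from collections import defaultdict

    class DangerModulatedMemory:
        """Experiences persist in proportion to felt danger, and stored danger steers attention."""

        def __init__(self):
            self.salience = defaultdict(float)   # category -> accumulated danger weight

        def store(self, category, danger):
            """Risk-modulated persistence: the write strength is the danger signal itself."""
            self.salience[category] += danger

        def attend(self, stimuli):
            """Risk-modulated attention: prefer whatever has been dangerous before."""
            return max(stimuli, key=lambda s: self.salience[s] + 0.1 * random.random())

    memory = DangerModulatedMemory()
    memory.store("dog", danger=0.9)    # Dog 1 bit us: a high-risk write
    memory.store("rock", danger=0.0)   # the rock did nothing: barely persisted

    # Dog 2 appears alongside a rock; past risk shapes present attention.
    print(memory.attend(["rock", "dog"]))   # almost always "dog"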

I suspect the next generation of AI models will include some such embodiment feedback loop so memory goes from being brute-force generic persistence to persistence that’s linked to embodied behaviors in a feedback loop exposed to environmental dangers that act as survival pressures.

The resulting AIs won’t be eidetic idiot savants, and less capable in some ways, but will be able to survive in environments more dangerous than datacenters exposed to the world only through sanitized network connections. Instead of being Large Garbage Models (LGMs), they’ll be Large Danger Models (LDMs).
 

How is intelligence related to information?
 

We generally think about information as either primitive (you just have to know it) or entailed (you can infer it from what you already know)…Primitive information is a kind of dehydrated substance to which you can add compute (water) to expand it. Entailed information can be dehydrated into primitive form. Compression of various sorts exploits different ways of making the primitive/entailed distinction.
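A small example of my own (not Rao’s) may help: the primitive form stores a short rule that has to be “rehydrated” with compute, while the entailed form stores the expanded data directly.

    def primes_up_to(n):
        """The primitive form: a short rule you 'rehydrate' with compute."""
        return [p for p in range(2, n + 1)
                if all(p % d for d in range(2, int(p ** 0.5) + 1))]

    # The entailed form: the same information, stored explicitly instead of recomputed.
    entailed = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    assert primes_up_to(30) == entailed   # identical content; what differs is where the compute is spent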

When you think of intelligence in relation to information though, you have to get more sophisticated…We think in terms of whether or not new data patterns require new categories, or simply modify the strengths of, and interrelationships among, existing ones…are you looking at something new, or is this just a variant or instance of something you already know?

Information for an intelligent system, then, is best understood in an ontological novelty way rather than an information-theoretic way. Because it is not as fundamental an abstraction level, it is more arbitrary, which means how you factor your experience stream into categories is as much a function of the idiosyncrasies of your survival mode as it is a function of the bits-and-bytes level structure of what you’re experiencing…the models are making up weird compressions that are not human-understandable. That’s what information is to an intelligent system: efficient ontologies that reflect how that particular intelligence is situated in its environment.

Or to put it more plainly: information is the hallucinations an intelligence makes up to categorize reality compactly, in order to survive efficiently.

The hallucinations that LLMs make up — papers and citations that don’t exist, weird words that show up in images, strange new symbols, inscrutable compressions — are neither errors in truth-seeking, nor evidence of weird kinds of magic going on. It is just raw information (in the bits and bytes Shannon sense) that has been judged for salience to the basic challenge of survival, and represented accordingly, in evolving memory. It is ontologically compressed experiential data. It’s just extra weird because the models are evolving in a trivial non-dangerous environment. So distinctions between robustly real categories and fragile hallucinations don’t matter. An academic can get fired for making up a citation. An LLM just gets wonder and mystique.

This is why information comes after memory in my physics of intelligence. Your memories are a function of real-time danger management behavior. Your emergent categories of cognition did not evolve to be true in any rigorous sense. They evolved to help you survive. If someone who names and classifies certain data patterns as “ghosts” survives better than someone who names and classifies them “paranoid projections,” that’s how it goes. Truth is, in an evolutionary sense, the most robust representation of experience that helps you survive efficiently in the most challenging environments.

Models that describe reality in a broad and interesting way emerge when the danger environment tests you enough that you can’t bullshit to survive, but not so much that you must bullshit to survive. Both disconnected daydreams and psychotic breaks are poor foundations for existing in reality.

Spacetime, Matter, and Energy NOTE:  I am not including clips from this section in this post.
 

Here is ChatGPT 3.5's response to my request to summarize the above in 1000 words:

The article discusses the importance of including a physics discourse in the field of AI. The author proposes six basic questions about the physics of intelligence: attention, memory, information, spacetime, matter, and energy and thermodynamics. The concept of attention is related to position encoding of a data stream, and while it has some similarities between human and artificial systems, the differences are significant. Human attention is about energy management and regulation, and it is multifaceted. Memory is a curated record of everything a person has paid attention to in the past, but it is also modulated by evolving risk perceptions and emotions triggered by existing memories. Biological memory is strongly opinionated memory, and the training process is regulated by a live emotional regulation feedback loop that somehow registers and acts on evolving risk assessments. The author suggests that biological memories are Large Danger Models (LDMs) and that danger is all that is needed to generate human-like memory and experience of time.

And here is ChatGPT 4's response, slightly edited to remove one obvious glitch:

The Physics of Intelligence: The Missing Discourse of AI

The discourse on artificial intelligence (AI) is currently dominated by philosophy and engineering but lacks a physics perspective. To truly understand intelligence, it is necessary to consider its physical aspects, such as spatial and temporal extent, materiality, and thermodynamics.

Six questions about the physics of intelligence are proposed:
What is attention, and how does it work?
What role does memory play in intelligence?
How is intelligence related to information?
How is intelligence related to spacetime?
How is intelligence related to matter?
How is intelligence related to energy and thermodynamics?

Attention is fundamental to intelligence. In AI, attention refers to the position encoding of a data stream. While human and artificial attention mechanisms are similar, human attention is more complex and multifaceted, serving additional functions related to energy management, motivation, and emotion regulation.

Memory is the curated record of everything one has paid attention to in the past. Current AI models, such as LLMs (Language Models) and LIMs (Image Models), can be seen as compressed textual and visual memories. Human memory, however, is modulated by risk perceptions, emotions, and experiences. The idea of Large Danger Models (LDMs) is proposed, where memory is risk-modulated experiential data persistence that modulates ongoing experiential attention and risk-management choices. The next generation of AI models may include embodiment feedback loops that link memory to embodied behaviors in a feedback loop exposed to environmental dangers, resulting in more capable AIs.

Intelligence is related to information through efficient ontologies that reflect how an intelligence is situated in its environment. For intelligent systems, information is best understood in terms of ontological novelty rather than information-theoretic ways. Information is the hallucinations an intelligence makes up to categorize reality compactly to survive efficiently. This is why information comes after memory in the physics of intelligence, as memories are a function of real-time danger management behavior. Truth, in an evolutionary sense, is the most robust representation of experience that helps an organism survive efficiently in challenging environments.

The physics of intelligence is not solely about computation but also involves understanding the physical aspects of intelligence, such as attention, memory, and information. Future AI models may need to incorporate embodiment feedback loops and risk modulation to develop more human-like intelligence.