Wednesday, November 29, 2023

Meta-Learned Models of Cognition

I pass on the text of a recent email from Behavioral and Brain Sciences inviting commentary on an article by Binz et al.  I am beginning to plow through the interesting text and figures - and will mention that motivated readers can obtain a PDF of the article from me.

Target Article: Meta-Learned Models of Cognition

Authors: Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, Matthew Botvinick, Jane X. Wang, and Eric Schulz

Deadline for Commentary Proposals: Wednesday, December 20, 2023

Abstract: Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates the commitment to a particular prior and a likelihood function which - in combination with Bayes' rule - determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to this day. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.

Keywords: meta-learning, rational analysis, Bayesian inference, cognitive modeling, neural networks
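
As I begin working through the article, the abstract's central claim (that training a single network across many tasks sampled from a prior yields something close to a Bayes-optimal learner) can be made concrete with a toy simulation. The sketch below is my own illustration rather than anything from the paper; it assumes PyTorch, and the task is deliberately trivial: predict the next flip of a coin whose bias is drawn from a uniform prior, for which the Bayes-optimal answer is Laplace's rule, (heads + 1) / (flips + 2).

# Toy illustration of the meta-learning idea in the abstract above: an RNN
# trained across many coin-flip "tasks" (a fresh bias drawn from a uniform
# prior each episode) ends up behaving, in-context, like a Bayes-optimal
# learner. All sizes and hyperparameters are arbitrary illustrative choices.
import torch
import torch.nn as nn

T, BATCH = 20, 256            # flips per episode, episodes per training step

class MetaLearner(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, flips):                  # flips: (batch, time, 1)
        h, _ = self.rnn(flips)
        return torch.sigmoid(self.out(h))      # P(next flip = 1) after each step

def sample_tasks(batch, t):
    p = torch.rand(batch, 1, 1)                # one coin bias per episode (the prior)
    return (torch.rand(batch, t, 1) < p).float()

model = MetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):                       # outer loop over tasks = meta-learning
    flips = sample_tasks(BATCH, T)
    pred = model(flips[:, :-1])                # after seeing flips 1..t, predict flip t+1
    loss = bce(pred, flips[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# In-context behavior after training: 3 heads observed in 4 flips.
with torch.no_grad():
    seq = torch.tensor([[[1.], [1.], [0.], [1.]]])
    print("network:", model(seq)[0, -1].item(), " Bayes-optimal:", (3 + 1) / (4 + 2))

Nothing in the code mentions Bayes' rule, yet after training the network's prediction should land near the Bayes-optimal 0.67; its inductive bias is acquired from the distribution of tasks it was trained on, which is the point of the framework.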

Monday, November 27, 2023

The feasibility of artificial consciousness through the lens of neuroscience

Some interesting perspectives from Aru, Larkum, and Shine in Trends in Neurosciences. Motivated readers can obtain a copy of the article's text from me.  

Highlights

Large language models (LLMs) can produce text that leaves the impression that one may be interacting with a conscious agent.
Present-day LLMs are text-centric, whereas the phenomenological umwelt of living organisms is multifaceted and integrated.
Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing. Currently, such processes are not implemented in LLMs.
The organizational complexity of living systems has no parallel in present-day AI tools. Possibly, AI systems would have to capture this biological complexity to be considered conscious.
LLMs and the current debates on conscious machines provide an opportunity to re-examine some core ideas of the science of consciousness.
Abstract
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Sunday, November 26, 2023

Religious wars in the tech industry.

In this post I continue to use MindBlog as my private reference library to put down essential ideas I want to hang onto by extracting relevant clips, in this case from the most recent essay by Venkatesh Rao. Here are the clips:

Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The Open AI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.

Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.

Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.

Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.

Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.

Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.

It’s not that there’s no common ground. Everyone agrees GPUs are important, Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is the part that is common ground is largely disconnected from the contentious bits.

In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps continuation of incredulous staring by other means. Such as boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and in the future, perhaps actual hot wars.

Why do I think we are entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) The vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle of a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.

The Six Religions

Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.

The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.

The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.

The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.
 
The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.
 
The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.

The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.

Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?

The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology. …. Ordinary political interests, while present, are secondary.

The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…

The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.

What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.

Sunday, November 19, 2023

AI is infusing our daily lives. Is it an existential threat?

As AI is being subtly melded into our daily lives by the Google, Apple, and Microsoft clouds, there is continuing debate over whether it is or isn't an existential threat. This debate has been brought to a head by the sudden and unexpected firing of OpenAI's chief executive Sam Altman, which has starkly exposed the philosophical rift in the AI community between those who are more and those who are less concerned about the potential risks of its untrammeled development. How many guardrails, if any, should be in place? I have been on the techno-optimists' side of this debate and passed on Marc Andreessen's rather extreme and overstated Substack post titled "Why AI Will Save the World" in a June 23 MindBlog post. I was then completely horrified by a subsequent piece by this ideologist of the Silicon Valley elite, titled "Fighting," which took a very positive view of Elon Musk challenging Mark Zuckerberg to a cage fight. I was going to write a piece on his dystopian future world, but two NYTimes pieces have done a better job of it than I could: Elizabeth Spiers's "A Tech Overlord's Horrifying, Silly Vision for Who Should Rule the World" and Ezra Klein's "The Chief Ideologist of the Silicon Valley Elite Has Some Strange Ideas".

Friday, November 10, 2023

Personalized chatbots and wearable AI devices for melding AI with our everyday lives

Personalized GPTs are here, but apparently not yet at a level usable by consumers like me who don't want to do any coding. I've been looking into making my own personal chatbot whose training data would be the 50 MB (roughly 20,000 pages) of my personal journal. I would then like to query and chat with the resulting chatbot about events in my life since 1980. If anyone reading this has some advice, let me know! A minimal sketch of one possible approach is below.
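
The sketch is a retrieval-augmented setup: embed chunks of the journal once, then for each question retrieve the most relevant chunks and hand them to a chat model together with the question. It assumes the OpenAI Python client (v1.x) and a flat JSON index; the file names, chunk size, and model choices are placeholders rather than recommendations.

# Hypothetical sketch: retrieval-augmented chat over a personal journal.
# Assumes a plain-text journal file and the OpenAI Python client (v1.x);
# journal.txt, index.json, and the model names are illustrative placeholders.
import json
import numpy as np
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def chunk_text(text, size=1500):
    """Split the journal into overlapping ~1500-character chunks."""
    step = size // 2
    return [text[i:i + size] for i in range(0, len(text), step)]

def build_index(journal_path="journal.txt", index_path="index.json"):
    """Embed every chunk once and save the chunks plus their vectors to disk."""
    with open(journal_path, encoding="utf-8") as f:
        chunks = chunk_text(f.read())
    vectors = []
    for i in range(0, len(chunks), 100):       # batch the embedding calls
        resp = client.embeddings.create(model="text-embedding-ada-002",
                                        input=chunks[i:i + 100])
        vectors.extend(d.embedding for d in resp.data)
    with open(index_path, "w") as f:
        json.dump({"chunks": chunks, "vectors": vectors}, f)

def ask(question, index_path="index.json", k=8):
    """Retrieve the k most relevant chunks and let the model answer from them."""
    with open(index_path) as f:
        idx = json.load(f)
    q = client.embeddings.create(model="text-embedding-ada-002",
                                 input=[question]).data[0].embedding
    vecs, qv = np.array(idx["vectors"]), np.array(q)
    sims = vecs @ qv / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(qv))
    context = "\n---\n".join(idx["chunks"][i] for i in np.argsort(-sims)[:k])
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system",
                   "content": "Answer using only these journal excerpts:\n" + context},
                  {"role": "user", "content": question}])
    return resp.choices[0].message.content

Calling build_index() once and then ask("Where did I spend the summer of 1985?") would be the whole workflow; if a flat index turns out to be too slow for 50 MB of text, the same logic carries over to a proper vector database.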

Below are some URLs to my morning's reading:

These links are to articles from Kevin Roose, NYTimes tech columnist:  

Personalized A.I. Agents Are Here. Is the World Ready for Them?

Kevin Roose Hard Fork podcast (transcript available) on this and other AI topics 

And this article is on AI devices seeking to cut into smartphone dominance 

It points to two such devices: The Humane Ai Pin and The Rewind AI Pendant 

Wednesday, November 08, 2023

Self-Organization: Nature's Intelligent Design

I want to pass on this link to a talk by Clint Sprott, who initiated the Chaos and Complexity Discussion group that I was an active member of before leaving Madison, WI. I remain on their email listserv. The slides for the talk can be downloaded here. Talk description:
Clint Sprott, Professor Emeritus, Department of Physics, UW-Madison, explains that although we believe that complex patterns must have a complex cause, patterns may spontaneously arise. This self-organization which occurs in nature can be described with simple computer models that replicate the features of the patterns.
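
As a taste of what such models look like, here is a toy example in that spirit (my own sketch, not one of Sprott's models): a grid of cells that starts as random noise and, under a single local rule of short-range activation and longer-range inhibition, organizes itself into spot- and stripe-like patterns. The grid size, neighborhood radii, and weighting are arbitrary illustrative choices.

# Toy self-organization sketch: short-range activation, longer-range inhibition.
# Starting from random noise, the grid tends to settle into patchy spots and
# stripes even though no pattern is specified anywhere in the rule.
import numpy as np

rng = np.random.default_rng(0)
N = 100
grid = rng.choice([-1.0, 1.0], size=(N, N))      # random initial state

def neighborhood_sum(g, radius):
    """Sum of cell values within a square neighborhood (with wraparound)."""
    total = np.zeros_like(g)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            total += np.roll(np.roll(g, dx, axis=0), dy, axis=1)
    return total

for step in range(50):
    activation = neighborhood_sum(grid, 1)       # nearby cells pull toward agreement
    inhibition = neighborhood_sum(grid, 4)       # a wider surround pushes back
    grid = np.sign(activation - 0.11 * inhibition + 1e-9)   # 0.11 roughly balances the two

# Coarse ASCII picture of the emergent pattern:
for row in grid[::4, ::4]:
    print("".join("#" if v > 0 else "." for v in row))

Nothing in the rule says "make stripes"; the structure emerges from the interplay of activation and inhibition, which is the sense in which the patterns self-organize.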

Monday, November 06, 2023

Visual event segmentation alters higher-level thought.

An interesting piece of work from Ongchoco et al.:  

Significance

Numbers can be unexpectedly powerful. Suppose you must provide the last two digits of your social security number (SSN), after which you are asked how much you are willing to pay for a bottle of wine. Though your SSN digits are irrelevant to your valuation judgment, they nonetheless influence numerical estimates, such that lower SSN digits lead to lower valuations and higher SSN digits, to higher valuations. Such “anchoring” effects are extremely prevalent and powerful. Here, we demonstrate how a feature of low-level visual perception—the spontaneous segmentation of experience at event boundaries, as when you simply walk through a doorway—can restrict or even eliminate anchoring effects across economic valuations, factual questions, and legal judgments.
Abstract
Research on higher-level thought has revealed many principles of reasoning and decision-making but has rarely made contact with how we perceive the world in the first place. Here we show how a lower-level property of perception—the spontaneous and task-irrelevant segmentation of continuous visual stimulation into discrete events—can restrict one of the most notorious biases in decision-making: numerical anchoring. Subjects walked down a long room in an immersive three dimensional (3D) animation and then made a numerical judgment (e.g., of how much a suitcase is worth, or of how many hours of community service a minor crime deserved). Critically, some subjects passed through a doorway (a visual event boundary) during their virtual walk, while others did not—equating time, distance traveled, and visual complexity. The anchoring manipulation was especially innocuous, not appearing to be part of the experiment at all. Before the online trial began, subjects reported the two-digit numerical value from a visually distorted “CAPTCHA” (“to verify that you are human”)—where this task-irrelevant anchor was either low (e.g., 29) or high (e.g., 92). With no doorway, we observed reliable anchoring effects: Higher CAPTCHA values produced higher estimates. With the doorway, however, such effects were attenuated or even eliminated. This generalized across tasks involving item valuations, factual questions, and legal judgments and in tests of both incidental and explicit anchoring. This demonstrates how spontaneous visual event segmentation can have profound consequences for higher-level thought.

Friday, November 03, 2023

How our genes support our illusory selves - the "Baldwin effect"

I have been thinking about giving more explanation of a paragraph from my Oct. 25 post, A Materialist's Credo:
Modern neuroscience has proved that our experienced perceptions of sensing and acting are these predictions. They are fantasies, or illusions, as is our sense of having a self with agency that experiences value, purpose, and meaning. Everything we do and experience is in the service of reducing surprises by fulfilling these fantasies. An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language.

The sentence "An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language." is invoking what is called "The Baldwin Effect" or Baldwin hypothesis. 

I decided to turn to ChatGPT-4 for some help.

Deric:  

What is the "Baldwin effect" in evolutionary biology, particularly with respect to human evolution?

ChatGPT: 

The "Baldwin effect" describes a mechanism by which learned behaviors can eventually become inherited traits through the process of natural selection. Named after the psychologist James Mark Baldwin, who proposed the idea in the late 19th century, this concept bridges the gap between phenotypic plasticity (the ability of an organism to change its phenotype in response to changes in the environment) and genetic evolution.

Here's a simplified overview of how the Baldwin effect works:
    1    Initial Variation: Within a population, there are variations in behavior. Some individuals can learn a new beneficial behavior in response to a change in the environment, while others cannot.
    2    Learning as a Temporary Solution: The ability to learn this behavior offers a short-term advantage. Those who can learn the new behavior have a better chance of surviving and reproducing.
    3    Genetic Fixation Over Time: Over generations, there can be genetic changes that make this learned behavior easier to acquire or even innate. As those with a predisposition to the behavior have a better survival and reproductive rate, genes that support the behavior become more common.
    4    Reduced Reliance on Learning: As the behavior becomes more genetically rooted, the population may become less reliant on learning the behavior from scratch. Eventually, what was once a learned behavior can manifest itself as an instinctual behavior.

With respect to human evolution, one could theorize the Baldwin effect playing a role in various capacities. For example, our ancestors might have learned specific skills, like tool use or certain social behaviors, that provided survival advantages. Over time, genetic predispositions towards these skills could have developed, leading to humans having innate tendencies or quicker learning capacities in these areas.

However, it's essential to note that proving the Baldwin effect in human evolution is challenging due to the intricacies of genetic and cultural evolution. Still, it offers an exciting perspective on the interplay between learning and genetics.

Deric: 

How might the Baldwin effect hypothesis be applied to the human invention of an illusory "I" or "self" capable of forming social groups, whose cohesion is reinforced by emotional behaviors driven by evolved neuroendocrine mechanisms?

ChatGPT:

The concept of an illusory "I" or "self" is deeply rooted in both philosophical discourses and cognitive science studies. When considered in the context of the Baldwin effect, the evolution of such a self-concept could be framed as an adaptive trait in the intricate web of human social structures. Here's one way to interpret this:   

    1    Initial Variation and Learning: Early human communities might have had members who had stronger self-awareness or more robust personal identities than others. Such individuals might have been better at recognizing their roles within groups, remembering past interactions, and predicting future interactions. This self-awareness might initially emerge as a learned or nurtured cognitive trait.
    2    Social Advantages: Individuals with a pronounced sense of self could have been more adept at forming and maintaining social bonds, making them crucial members of their communities. They might have been better at understanding social hierarchies, reciprocity, and the nuances of cooperation, which are essential for the survival and success of early human societies.
    3    Emotional Behaviors and Neuroendocrine Mechanisms: As these individuals navigated their social landscapes, their brains could have evolved neuroendocrine mechanisms to support the complex emotions related to social interactions—empathy, jealousy, gratitude, shame, etc. These emotions, in turn, would reinforce the cohesion and stability of social groups, ensuring the group's collective survival and success.
    4    Genetic Fixation Over Time: Over generations, the capacity for a strong self-concept and the accompanying emotional depth might become increasingly genetically ingrained. This evolution would mean that newer generations would be born with an innate predisposition towards these traits, reducing the learning curve required to acquire them.
    5    Cultural Evolution: Parallel to the genetic fixation, cultural mechanisms might evolve to further reinforce the idea of the self and its importance in the society—through stories, traditions, rituals, etc. The interplay between genetic predispositions and cultural learning might lead to the establishment of the 'self' as a foundational concept in human societies.

In this context, the Baldwin effect would suggest that our ancestors' learned behavior—forming a concept of self for social advantages—over time, became a genetically supported predisposition. This evolution would mean that the complex cognitive and emotional behaviors supporting a strong self-concept became more instinctual and less reliant on individual learning.
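
As a coda to this exchange, the general mechanism ChatGPT outlines (learning buys a short-term advantage, and selection then gradually makes the trait innate) is simple enough to watch in a toy simulation. The sketch below is my own illustration with made-up parameters, not a model from the literature: each individual carries an innate predisposition g, the beneficial behavior is either expressed innately (with probability g) or acquired by costly learning, and fitness-proportional selection plus small mutations do the rest.

# Toy Baldwin-effect simulation (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)
POP, GENS = 1000, 200
LEARN_P, LEARN_COST, MUT = 0.5, 0.2, 0.03

g = rng.uniform(0.0, 0.2, POP)        # innate predisposition starts low

for gen in range(GENS):
    innate = rng.random(POP) < g                      # behavior comes for free
    learned = ~innate & (rng.random(POP) < LEARN_P)   # or is learned, at a cost
    if gen % 50 == 0 or gen == GENS - 1:
        print(f"gen {gen:3d}: mean innate predisposition = {g.mean():.2f}, "
              f"fraction relying on learning = {learned.mean():.2f}")
    fitness = np.where(innate, 1.0,
               np.where(learned, 1.0 - LEARN_COST, 0.1))
    # Fitness-proportional selection plus small mutations ("genetic fixation over time"):
    parents = rng.choice(POP, size=POP, p=fitness / fitness.sum())
    g = np.clip(g[parents] + rng.normal(0.0, MUT, POP), 0.0, 1.0)

Over the generations the mean innate predisposition climbs and the fraction of individuals that still have to learn the behavior falls, which is the Baldwin effect in miniature: what began as a learned trait ends up genetically supported.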

Wednesday, November 01, 2023

David Brooks on "What is a Person?"

David Brooks has an astounding ability to simplify and present important ideas. I pass on a few clips from Chapter 5 - titled "What is a Person?" - of his new book “How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen.” In Chapter 9 he offers an equally lucid presentation of work in the cognitive sciences by Gibson and Proffitt showing how people in different life circumstances literally see different worlds. I've enjoyed reading this book and recommend that you read it.

As we try to understand other people, we want to be constantly asking ourselves: How are they perceiving this situation? How are they experiencing this moment? How are they constructing their reality?
Let me dip briefly into brain science to try to show you how radical this process of construction is. Let’s take an example as simple as the act of looking around a room. It doesn’t feel like you're creating anything. It feels like you're taking in what’s objectively out there. You open your eyes. Light waves flood in. Your brain records what you see: a chair, a painting, a dust bunny on the floor. It feels like one of those old-fashioned cameras—the shutter opens and light floods in and gets recorded on the film.
But this is not how perception really works. Your brain is locked in the dark, bony vault of your skull. Its job is to try to make sense of the world given the very limited amount of information that makes it into your retinas, through the optic nerves, and onto the integrative layer of the visual cortex. Your senses give you a poor-quality, low-resolution snapshot of the world, and your brain is then forced to take that and construct a high-definition, feature-length movie.
To do that, your visual system constructs the world by taking what you already know and applying it to the scene in front of you. Your mind is continually asking itself questions like “What is this similar to?” and “Last time I was in this situation, what did I see next?” Your mind projects out a series of models of what it expects to see. Then the eyes check in to report back about whether they are seeing what the mind expected. In other words, seeing is not a passive process of receiving data; it’s an active process of prediction and correction.
Perception, the neuroscientist Anil Seth writes, is “a generative, creative act.” It is “an action-oriented construction, rather than a passive registration of an objective external reality.” Or as the neuroscientist Lisa Feldman Barrett notes, “Scientific evidence shows that what we see, hear, touch, taste, and smell are largely simulations of the world, not reactions to it.” Most of us non-neuroscientists are not aware of all this constructive activity, because it happens unconsciously. It's as if the brain is composing vast, complex Proustian novels, and to the conscious mind it feels like no work at all.
Social psychologists take a wicked delight in exposing the flaws of this prediction-correction way of seeing. They do this by introducing things into a scene that we don’t predict will be there and therefore don’t see. You probably know about the invisible gorilla experiment. Researchers present subjects with a video of a group of people moving around passing a basketball and ask the subjects to count the number of passes by the team wearing white. After the video, the researchers ask, “Did you see the gorilla?” Roughly half the research subjects have no idea what the researchers are talking about. But when they view the video a second time, with the concept “gorilla” now in their heads, they are stunned to see that a man in a gorilla suit had strolled right into the circle, stood there for a few seconds, and then walked out. They didn’t see it before because they didn’t predict “gorilla.”
In my favorite experiment of this sort, a researcher asks a student for directions to a particular place on a college campus. The student starts giving directions. Then a couple of “workmen”—actually, two other researchers—rudely carry a door between the directions asker and the directions giver. As the door passes between them, the directions asker surreptitiously trades places with one of the workmen. After the door has passed, the directions giver finds himself giving directions to an entirely different human being. And the majority of these directions givers don’t notice. They just keep on giving directions. We don’t expect one human being to magically turn into another, and therefore we don't see it when it happens.
In 1951 there was a particularly brutal football game between Dartmouth and Princeton. Afterward, fans of both teams were furious because, they felt, the opposing team had been so vicious. When psychologists had students rewatch a film of the game in a calmer setting, the students still fervently believed that the other side had committed twice as many penalties as their own side. When challenged about their biases, both sides pointed to the game film as objective proof that their side was right. As the psychologists researching this phenomenon, Albert Hastorf and Hadley Cantril, put it, “The data here indicate that there is no such ‘thing’ as a ‘game’ existing ‘out there’ in its own right which people merely ‘observe.’ The ‘game’ ‘exists’ for a person and is experienced by him only insofar as certain things have significances in terms of his purpose.” The students from the different schools constructed two different games depending on what they wanted to see. Or as the psychiatrist Iain McGilchrist puts it, “The model we choose to use to understand something determines what we find.”
Researchers like exposing the flaws in our way of seeing, but I’m constantly amazed at how brilliant the human mind is at constructing a rich, beautiful world. For example, in normal conversation, people often slur and mispronounce words. If you heard each word someone said in isolation, you wouldn't be able to understand 50 percent of them. But because your mind is so good at predicting what words probably should be in what sentence, you can easily create a coherent flow of meaning from other people's talk.
The universe is a drab, silent, colorless place. I mean this quite literally. There is no such thing as color and sound in the universe; it’s just a bunch of waves and particles. But because we have creative minds, we perceive sound and music, tastes and smells, color and beauty, awe and wonder. All that stuff is in here in your mind, not out there in the universe.
I've taken this dip into neuroscience to give the briefest sense of just how much creative artistry every person is performing every second of the day. And if your mind has to do a lot of constructive work in order for you to see the physical objects in front of you, imagine how much work it has to undertake to construct your identity, your life story, your belief system, your ideals. There are roughly eight billion people on Earth, and each one of them sees the world in their own unique, never-to-be-repeated way.