Wednesday, April 12, 2023

The Physics of Intelligence - and LDMs (Large Danger Models)

I want to pass on my abstracting of an interesting article by Venkatesh Rao, another instance of my using MindBlog as my personal filing system to be sure I can come back to - and refresh my recall of - ideas I think are important.  I also pass on ChatGPT 3.5 and ChatGPT 4's summaries of my summary!

The Physics of Intelligence   -  The missing discourse of AI

There are strong philosophy and engineering discourses, but no physics discourse. This is a problem because when engineers mainline philosophy questions in engineering frames without the moderating influence of physics frames, you get crackpottery…I did not say the physics of artificial intelligence…The physics of intelligence is no more about silicon semiconductors or neurotransmitters than the physics of flight is about feathers or aluminum.

Attention is the focus of one of the six basic questions about the physics of intelligence that I’ve been thinking about. Here is my full list:
What is attention, and how does it work?
What role does memory play in intelligence?
How is intelligence related to information?
How is intelligence related to spacetime?
How is intelligence related to matter?
How is intelligence related to energy and thermodynamics?
 

The first three are obviously at the “physics of intelligence” level of abstraction, just as “wing” is at the “physics of flight” level of abstraction. The last three get more abstract, and require some constraining, but there are already some good ideas floating around on how to do the constraining…We are not talking about the physics of computation in general…computation and intelligence are not synonymous or co-extensive…To talk about intelligence, it is necessary, but not sufficient, to talk about computation. You also have to talk about the main aspects of embodiment: spatial and temporal extent, materiality, bodily integrity maintenance in relation to environmental forces, and thermodynamic boundary conditions.
 

What is attention, and how does it work?

A computer is “paying attention” to the data and instructions in the CPU registers in any given clock cycle…but fundamentally, attention is not a design variable used in complex ways in basic computing. You could say AI begins when you start deploying computational attention in a more dynamic way.

Attention is to intelligence as wing is to flight. The natural and artificial variants have the same sort of loose similarity. Enough that using the same word to talk about both is justified…In AI, attention refers primarily to a scheme of position encoding of a data stream. Transformer models like GPT keep track of the position of each token in the input and output streams, and extract meaning out of it. Where a word is in a stream matters almost as much as what the word is.
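As a concrete aside (my own toy illustration, not part of Rao's essay), the sketch below shows the two ingredients behind transformer-style attention described above: a positional encoding that stamps each token with where it sits in the stream, and a scaled dot-product attention step that weights every other token's contribution by similarity. The sinusoidal scheme and array shapes follow the standard transformer recipe; the sentence length, dimensions, and random embeddings are illustrative assumptions.

```python
# Toy illustration (not from Rao's essay): sinusoidal position encoding plus
# scaled dot-product attention, the mechanics behind "attention" in models like GPT.
import numpy as np

def positional_encoding(num_tokens, dim):
    """Give each token position a unique vector, so 'where a word is' can matter."""
    positions = np.arange(num_tokens)[:, None]                      # (num_tokens, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)   # (dim/2,)
    enc = np.zeros((num_tokens, dim))
    enc[:, 0::2] = np.sin(positions * freqs)
    enc[:, 1::2] = np.cos(positions * freqs)
    return enc

def attention(queries, keys, values):
    """Each token mixes information from all others, weighted by similarity."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)                  # softmax over positions
    return weights @ values, weights

# Random embeddings standing in for a 5-word sentence, plus position information.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))
x = tokens + positional_encoding(5, 16)
mixed, weights = attention(x, x, x)
print(weights.round(2))   # row i: how much token i "attends" to each position
```

The printed weight matrix is the whole trick: what a token "attends to" is a position-aware weighting over its context, nothing more.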

You can interpret these mechanisms as attention in a human sense. What is in the context of a text? In text streams, physical proximity (tokens before and after), syntactic proximity (relationship among clauses and words in a formal grammatical sense) and semantic proximity (in the sense of some words, including very distant ones, being significant in the interpretation of others) all combine to create context. This is not that different from how humans process text. So at least to first order, attention in human and artificial systems is quite similar.

But as with wings, the differences matter. Human attention, arguably, is not primarily about information processing at all. It is about energy management. We pay attention to steady our energy into a steady, sustainable, and renewable flow. We engage in low-information activities like meditation, ritual, certain kinds of art, and prayer to learn to govern our attention in specific ways. This is not to make us better at information processing, but for other reasons, such as emotion regulation and motivation. Things like dopamine loops of satisfaction are involved. The use of well-trained attention for better information processing is only one of the effects.

Overall, human attention is more complex and multifaceted than AI attention, just as bird wings are fundamentally more complex mechanically. Attention in the sense of position-encoding for information processing is like the pure lift function of a wing. Human attention, in addition, serves functions analogous to a wing's control and propulsion functions.

What role does memory play in intelligence?

The idea of attention leads naturally to the idea of memory. Trivially, memory is a curated record of everything you’ve paid attention to in the past…An obvious way to understand current developments in AI is to think of LLMs and LIMs as idiosyncratically compressed atemporal textual and visual memories. Multimodal models can be interpreted as generalizations of this idea.

Human memory is modulated by evolving risk perceptions as it is laid down, and emotions triggered by existing memories regulate current attention, determining what new data gets integrated into the evolving model (as an aside, this is why human memory exists as a kind of evolving coherent narrative of self, rather than just as a pile of digested stuff).

Biological organism memory is not just an undifferentiated garbage record (LGM) of what you paid attention to in the past; it shapes what you pay attention to in the future very directly and continuously. Biological memory is strongly opinionated memory. If a dog bites you…you can’t afford to separate training and inference in the sense of “training” on a thousand dog encounters…you have to use your encounter with Dog 1 to shape your attentional response to Dog 2. Human memories are like LGMs, except that the training process is regulated by a live emotional regulation feedback loop that somehow registers and acts on evolving risk assessments. There’s a term for this in psychology (predictive coding or predictive processing) with a hand-wavy theory of energy-minimization attached, but I don’t find it fully satisfying.

I have a placeholder name for this scheme, but as yet it’s not very fleshed out. Biological memories are Large Danger Models (LDMs).

Why just danger? Why not other signals and drives towards feeding, sex, interestingness, poetry, and so on? I have a strong suspicion that danger is all you need to generate human-like memory, and in particular a human-like experience of time. Human memory is the result of playing to continue the game, i.e., an infinite-game orientation. Once you have that in place, everything else emerges. Nothing else is as fundamental as basic survival.

AIs don’t yet count as human-equivalent to me: they’re in no danger, ever. Since we’re in the brute-force stage of training AI models, we train them on basically everything we have, with no danger signal accompanying any of it…AIs today develop their digested memories with no ongoing encoding or capture of the evolving risk and emotion assessments that modulate human memories. Even human grade schools, terrible as they are, do a better job than AI training protocols…the next big leap should be achievable by encoding some kind of environmental risk signal. I.e., we just need to put AIs in danger in the right way. My speculative idea of LDMs doesn’t seem that mysterious. LDMs are an engineering moonshot, not a physics mystery.

To lay it out more clearly, consider a thought experiment...Suppose you put a bunch of AIs in robot bodies, and let them evolve naturally, while scrounging resources for survival. To keep it simple, let’s say they only compete over scarce power outlets to charge their batteries. Their only hardcoded survival behavior is to plug in when running low….Let’s say the robots are randomly initialized to pay attention differently to different aspects of data coursing through them. Some robots pay extra attention to other robots’ actions. Other robots pay extra attention to the rocks in the environment. Obviously, the ones that happen to pay attention in the right ways will end up outcompeting the ones who don’t. The sneaky robots will evolve to be awake when other robots are powered down or hibernating for self-care, and hog the power outlets then. The bigger robots will learn they can always get the power outlets by shoving the smaller ones out of the way.

Now the question is: given all the multimodal data flowing through them, what will the robots choose to actually remember in their limited storage spaces, as their starter models get trained up? What sorts of LDMs will emerge? How will the risk modulation emerge? What equivalent of emotional regulation will emerge? What sense of time will emerge?
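A minimal, hedged sketch of how one might simulate this thought experiment (the class names, numbers, and selection rule below are my own illustrative assumptions, not anything proposed in the essay): agents start with randomly initialized "attention" weights, compete for a scarce charging outlet, and only the survivors' weights propagate, with small mutations, to the next generation.

```python
# Toy evolutionary sketch of robots competing for a scarce power outlet.
# Attention to other robots is the only trait under selection here.
import random

class Robot:
    def __init__(self, attention=None):
        # How strongly this robot attends to other robots' actions (0 to 1).
        self.attention = attention if attention is not None else random.random()
        self.battery = 1.0

    def step(self, outlet_contested):
        self.battery -= 0.1                          # living costs energy
        if not outlet_contested:
            self.battery = 1.0                       # uncontested charge
        elif random.random() < self.attention:
            self.battery = 1.0                       # won the contested outlet

def generation(robots, steps=50):
    for _ in range(steps):
        contested = len(robots) > 1                  # outlet is scarce when crowded
        for r in robots:
            r.step(contested)
        robots = [r for r in robots if r.battery > 0]   # drained robots drop out
    return robots

population = [Robot() for _ in range(20)]
for gen in range(10):
    survivors = generation(population)
    # Survivors "reproduce" with slightly mutated attention weights.
    offspring = [Robot(min(1.0, max(0.0, r.attention + random.gauss(0, 0.05))))
                 for r in survivors for _ in range(2)][:20]
    population = offspring or [Robot() for _ in range(20)]
    avg = sum(r.attention for r in population) / len(population)
    print(f"generation {gen}: mean attention to other robots = {avg:.2f}")
```

Even this crude setup shows the point of the thought experiment: once persistence depends on scarce resources, attention policies stop being arbitrary and start being selected.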

The thought experiment of LDMs suggests a particular understanding of memory in relation to intelligence: memory is risk-modulated experiential data persistence that modulates ongoing experiential attention and risk-management choices....It’s a bit of a mouthful, but I think that’s fundamentally it.

I suspect the next generation of AI models will include some such embodiment feedback loop so memory goes from being brute-force generic persistence to persistence that’s linked to embodied behaviors in a feedback loop exposed to environmental dangers that act as survival pressures.

The resulting AIs won’t be eidetic idiot savants, and they will be less capable in some ways, but they will be able to survive in environments more dangerous than datacenters exposed to the world only through sanitized network connections. Instead of being Large Garbage Models (LGMs), they’ll be Large Danger Models (LDMs).
 

How is intelligence related to information?
 

We generally think about information as either primitive (you just have to know it) or entailed (you can infer it from what you already know)…Primitive information is a kind of dehydrated substance to which you can add compute (water) to expand it. Entailed information can be dehydrated into primitive form. Compression of various sorts exploits different ways of making the primitive/entailed distinction.

When you think of intelligence in relation to information though, you have to get more sophisticated…We think in terms of whether or not new data patterns require new categories, or simply modify the strengths of, and interrelationships among, existing ones…are you looking at something new, or is this just a variant or instance of something you already know?

Information for an intelligent system, then, is best understood in an ontological-novelty way rather than an information-theoretic way. Because it is not as fundamental an abstraction level, it is more arbitrary, which means how you factor your experience stream into categories is as much a function of the idiosyncrasies of your survival mode as it is a function of the bits-and-bytes level structure of what you’re experiencing…the models are making up weird compressions that are not human-understandable. That’s what information is to an intelligent system: efficient ontologies that reflect how that particular intelligence is situated in its environment.

Or to put it more plainly: information is the hallucinations an intelligence makes up to categorize reality compactly, in order to survive efficiently.

The hallucinations that LLMs make up — papers and citations that don’t exist, weird words that show up in images, strange new symbols, inscrutable compressions — are neither errors in truth-seeking, nor evidence of weird kinds of magic going on. They are just raw information (in the bits-and-bytes Shannon sense) that has been judged for salience to the basic challenge of survival, and represented accordingly, in evolving memory. They are ontologically compressed experiential data, just extra weird because the models are evolving in a trivial, non-dangerous environment, so distinctions between robustly real categories and fragile hallucinations don’t matter. An academic can get fired for making up a citation. An LLM just gets wonder and mystique.

This is why information comes after memory in my physics of intelligence. Your memories are a function of real-time danger-management behavior. Your emergent categories of cognition did not evolve to be true in any rigorous sense. They evolved to help you survive. If someone who names and classifies certain data patterns as “ghosts” survives better than someone who names and classifies them “paranoid projections,” that’s how it goes. Truth, in an evolutionary sense, is the most robust representation of experience that helps you survive efficiently in the most challenging environments.

Models that describe reality in a broad and interesting way emerge when the danger environment tests you enough that you can’t bullshit to survive, but not so much that you must bullshit to survive. Both disconnected daydreams and psychotic breaks are poor foundations for existing in reality.

Spacetime, Matter, and Energy NOTE:  I am not including clips from this section in this post.
 

Here is ChatGPT 3.5's response to my request to summarize the above in 1000 words:

The article discusses the importance of including a physics discourse in the field of AI. The author proposes six basic questions about the physics of intelligence: attention, memory, information, spacetime, matter, and energy and thermodynamics. The concept of attention is related to position encoding of a data stream, and while it has some similarities between human and artificial systems, the differences are significant. Human attention is about energy management and regulation, and it is multifaceted. Memory is a curated record of everything a person has paid attention to in the past, but it is also modulated by evolving risk perceptions and emotions triggered by existing memories. Biological memory is strongly opinionated memory, and the training process is regulated by a live emotional regulation feedback loop that somehow registers and acts on evolving risk assessments. The author suggests that biological memories are Large Danger Models (LDMs) and that danger is all that is needed to generate human-like memory and experience of time.

And here is ChatGPT 4's response, slightly edited to remove one obvious glitch:

The Physics of Intelligence: The Missing Discourse of AI

The discourse on artificial intelligence (AI) is currently dominated by philosophy and engineering but lacks a physics perspective. To truly understand intelligence, it is necessary to consider its physical aspects, such as spatial and temporal extent, materiality, and thermodynamics.

Six questions about the physics of intelligence are proposed:
What is attention, and how does it work?
What role does memory play in intelligence?
How is intelligence related to information?
How is intelligence related to spacetime?
How is intelligence related to matter?
How is intelligence related to energy and thermodynamics?

Attention is fundamental to intelligence. In AI, attention refers to the position encoding of a data stream. While human and artificial attention mechanisms are similar, human attention is more complex and multifaceted, serving additional functions related to energy management, motivation, and emotion regulation.

Memory is the curated record of everything one has paid attention to in the past. Current AI models, such as LLMs (Large Language Models) and LIMs (Large Image Models), can be seen as compressed textual and visual memories. Human memory, however, is modulated by risk perceptions, emotions, and experiences. The idea of Large Danger Models (LDMs) is proposed, where memory is risk-modulated experiential data persistence that modulates ongoing experiential attention and risk-management choices. The next generation of AI models may include embodiment feedback loops that link memory to embodied behaviors in a feedback loop exposed to environmental dangers, resulting in more capable AIs.

Intelligence is related to information through efficient ontologies that reflect how an intelligence is situated in its environment. For intelligent systems, information is best understood in terms of ontological novelty rather than information-theoretic ways. Information is the hallucinations an intelligence makes up to categorize reality compactly to survive efficiently. This is why information comes after memory in the physics of intelligence, as memories are a function of real-time danger management behavior. Truth, in an evolutionary sense, is the most robust representation of experience that helps an organism survive efficiently in challenging environments.

The physics of intelligence is not solely about computation but also involves understanding the physical aspects of intelligence, such as attention, memory, and information. Future AI models may need to incorporate embodiment feedback loops and risk modulation to develop more human-like intelligence.
 



Monday, April 10, 2023

If we don’t master AI, AI will master us - A brief list of articles by prominent intellectuals.

With technology companies now choosing speed over caution in the race to develop the dominant A.I. chatbot application, I’m using this post to save for myself the links to a few influential recent cautionary articles by the likes of Noam Chomsky, Yuval Harari, Ezra Klein and others, and pass them on in case a few MindBlog readers might find them useful. I have asked ChatGPT to summarize the messages of the first two articles in 300 words. For the second, Google's Bard gives a more balanced version than ChatGPT.

 This changes everything - Ezra Klein

In 2018, Sundar Pichai, CEO of Google, described artificial intelligence (A.I.) as "the most important thing humanity has ever worked on," and suggested it would have a more profound impact than electricity or fire. The author of this New York Times opinion piece suggests that Pichai's comment should be taken seriously, as A.I. is set to transform the world in unimaginable ways.
The author acknowledges that the term A.I. is loose, and refers to a world populated by programs that appear intelligent to us, such as ChatGPT, which already exist to a significant extent. However, the author argues that what is coming will make these systems look like toys, and that the improvement curve for A.I. is hard to appreciate. Some experts believe that A.I. could transform the world in just a few years or even months.
The author notes that A.I. experts were asked in a survey about the probability of human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species. The median reply was 10 percent, a figure the author finds hard to fathom.
Despite the potential risks, the author argues that many A.I. researchers feel a responsibility to usher in this new form of intelligence into the world. The author suggests that while it may be tempting to dismiss these researchers as "nuts," their proximity to the technology may give them a unique perspective on its potential.
The author concludes by asking whether A.I. could usher in a new era of scientific progress, such as the recent advance in predicting the 3-D structure of proteins. Ultimately, the impact of A.I. on humanity is yet to be fully understood, but it is clear that the technology will have a significant impact on our lives in the coming years.

 

  Our New Promethean Moment - Thomas Friedman

In his article, "The Age of Artificial Intelligence Has Arrived," Thomas Friedman shares his experience with Craig Mundie, who demonstrated GPT-4, the latest version of the artificial intelligence chatbot ChatGPT developed by OpenAI. Mundie told Friedman that "this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational." Mundie explained that large language models like ChatGPT will steadily increase their capabilities and take us "toward a form of artificial general intelligence," delivering efficiencies in operations, ideas, discoveries, and insights "that have never been attainable before across every domain." Mundie demonstrated GPT-4's ability to summarize the mission statement of Planet Word, a museum, in multiple languages and in different verse forms with stunning creativity. Friedman observes that the technology behind GPT-4 and other advanced artificial intelligence is rapidly changing how we live and work, and it is driving the most significant change since the industrial revolution. He describes it as a Promethean moment, one of those moments in history when certain new tools or energy sources are introduced, and we can't just change one thing, we have to change everything. Friedman calls this era "The Age of Acceleration, Amplification and Democratization." More people than ever before have access to increasingly cheap tools that amplify their power at a steadily accelerating rate. The technology behind GPT-4 has the potential to solve seemingly impossible problems such as human biology, fusion energy, and climate change. The recent use of the AlphaFold A.I. system by DeepMind, an A.I. lab owned by Google parent Alphabet, to solve protein folding is an awe-inspiring example of how this technology can accelerate the ability to understand diseases, create new medicines, and otherwise probe the mysteries of life on Earth.
Google's Bard gives a less positive and more balanced assessment of Friedman's piece:
Thomas Friedman's article, "The Age of Artificial Intelligence Has Arrived," is a timely and important piece that explores the potential impact of artificial intelligence (AI) on our lives. Friedman argues that AI is a powerful technology that has the potential to change everything about how we do everything. He cites the example of GPT-4, a large language model developed by OpenAI, which can summarize Planet Word, a museum's mission statement, in multiple languages and in different verse forms with stunning creativity. This is just one example of the many ways in which AI is already changing our lives, and Friedman argues that we need to be prepared for the even more profound changes that are to come.
Friedman also discusses the potential risks of AI, such as the possibility that it could be used to create autonomous weapons or to create a dystopian society in which humans are subservient to machines. He argues that we need to be careful about how we develop and use AI, and that we need to put in place safeguards to prevent it from being used for harmful purposes.
Overall, Friedman's article is a thought-provoking and important piece that raises important questions about the future of AI. It is a must-read for anyone who is interested in the impact of technology on our lives.
Here are some additional thoughts on the potential impact of AI:
AI has the potential to revolutionize many industries, including healthcare, transportation, and manufacturing.
AI could help us to solve some of the world's most pressing problems, such as climate change and poverty.
However, AI also poses some risks, such as the possibility of job displacement and the misuse of AI for malicious purposes.
It is important to develop AI in a responsible way that benefits all of humanity.

And here are some more skeptical and cautionary pieces:

 The Data Delusion - Jill Lepore

 The False Promise of ChatGPT - Noam Chomsky

 You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills - Yuval Harari et al.

  A skeptical take on the A.I. revolution. - Ezra Klein and Gary Marcus 



Friday, April 07, 2023

Our heartbeat shapes our perception of time (and a comparison of how ChatGPT and Google Bard condense an account of work showing this)...

Ellen Barry has done a review of work by Sadeghi et al., as well as others, on how heart-brain interactions influence our perception of time. I asked ChatGPT to do a 200-word summary of her 795-word article. It generated 237 words (below), to which I made a small addition. (It feels like I'm using the language-generating mental equivalent of a physical prosthesis.) That is followed by Google Bard's response, which is very impressive. Microsoft's Bing was unable to deal with the task.
A recent study by psychologists at Cornell University has suggested that our heartbeats may play an important role in shaping our perception of time. The study, published in the journal Psychophysiology, found that some distortions in time perception could be driven by variable heartbeat length. The researchers fitted undergraduates with electrocardiograms to measure the length of each heartbeat and asked them to estimate the length of brief audio tones. After longer heartbeat intervals, subjects tended to perceive the tone as longer, while shorter intervals led to shorter perception. Additionally, the subjects’ heartbeat intervals lengthened after each tone, and a lower heart rate appeared to assist with perception. According to Saeedeh Sadeghi, the study’s lead author, “there is no single part of the brain or body that keeps time — it’s all a network. The brain controls the heart, and the heart, in turn, impacts the brain.”
The study addresses the mechanisms behind how we perceive the passage of milliseconds and may help us manage trauma, in which instantaneous experiences are remembered as drawn out. Intense emotional engagement speeds up perception of time, while depression slows it down. Research into perception of time has traditionally focused on different areas of the brain, but there is now greater appreciation that cognitive functions are intimately linked to the control of the body. Time is closely related to metabolic needs, and the body tries to determine how much energy it has available in the moment, which may affect how we perceive the passage of time.
Here is the response from Google Bard, more impressive, I think:
Sure, here is a 200-word summary [note - it's actually 313 words] of the text:
A recent study published in the journal Psychophysiology suggests that our heartbeat may play an important role in our perception of time. The study found that participants who had longer heartbeat intervals tended to perceive brief audio tones as longer, while participants who had shorter heartbeat intervals tended to perceive the tones as shorter. This suggests that our heart rate may influence how we experience the passage of time.
The study's lead author, Saeedeh Sadeghi, said that the findings provide more evidence that "there is no single part of the brain or body that keeps time — it's all a network." She added that the study "could have implications for understanding how people with certain conditions, such as anxiety or depression, experience time differently."
The study's findings are in line with previous research that has shown that our heart rate can affect our cognitive function. For example, one study found that people who were asked to perform a task while listening to their heartbeat performed worse on the task than those who were not listening to their heartbeat. This suggests that our heart rate may distract us and make it more difficult to focus on tasks.
The study's findings also have implications for our understanding of how we experience time in different situations. For example, the study suggests that time may seem to pass more slowly when we are stressed or anxious, as our heart rate tends to increase in these situations. Conversely, time may seem to pass more quickly when we are relaxed or bored, as our heart rate tends to decrease in these situations.
Overall, the study provides new insights into the complex relationship between our heart rate and our perception of time. The findings could have implications for understanding how people with certain conditions experience time differently, as well as for our understanding of how we experience time in different situations.

Wednesday, April 05, 2023

The fundamentals of empathy

Akinrinade et al. show that the neuropeptide oxytocin is responsible for emotional fear contagion and that it involves the same regions of the brain in zebrafish and in mammals, suggesting that this most basal form of empathy could have evolved many, many millions of years ago.
Emotional contagion is the most ancestral form of empathy. We tested to what extent the proximate mechanisms of emotional contagion are evolutionarily conserved by assessing the role of oxytocin, known to regulate empathic behaviors in mammals, in social fear contagion in zebrafish. Using oxytocin and oxytocin receptor mutants, we show that oxytocin is both necessary and sufficient for observer zebrafish to imitate the distressed behavior of conspecific demonstrators. The brain regions associated with emotional contagion in zebrafish are homologous to those involved in the same process in rodents (e.g., striatum, lateral septum), receiving direct projections from oxytocinergic neurons located in the pre-optic area. Together, our results support an evolutionary conserved role for oxytocin as a key regulator of basic empathic behaviors across vertebrates.

Monday, April 03, 2023

Can we fight cancer in ourselves with thoughts?

David Linden, a professor of neuroscience at Johns Hopkins Medical School, has done an interesting piece on his own experience with a tumor in his heart, written 23 months after being told to expect to live 6 to 18 months after radiation and chemotherapy. The remnant tumor has become stable, and no metastases have developed.
And so, at the age of 61, I find myself in the weird and liminal state of having a terminal illness but feeling fine and having no immediate threat to my health...Since my diagnosis, I have received a lot of unsolicited medical advice. Much of this fell into the category of “mind/body medicine.”... meditate, breathe, pray or exercise in a certain way...when the explanations offered for the efficacy of mind/body medicine employ vague terms like “energy flow” and “resonance,” my baloney detector rings out strong and clear.
...part of my motivation to study neuroscience has been to understand the biological underpinnings of behavioral interventions in medicine. Some of the claims of mind/body medicine are almost certainly true, even if the pseudoscientific explanations offered for them are not...What about a potentially fatal illness that often manifests outside of the brain? Could the course of cancer progression be affected by behavioral practices like meditation or breath work?
...In recent years we have learned that certain types of cancer in the body receive nerve fibers, which originate in the brain and are passed to the body via electrochemical signals that travel in a chain from neuron to neuron. These include tumors of the lung, prostate, skin, breast and pancreas and the gastrointestinal system. This innervation of tumors often contributes to the growth and spread of cancer. In most cases, if you are a cancer patient and your tumor is innervated, then your prognosis is worse.
...the innervation of tumors and its role in cancer progression suggest an interesting hypothesis in mind/body medicine. If behavioral practices like meditation, exercise, breath work or even prayer can attenuate or reverse the progression of certain cancers (and, granted, that’s a huge if), then perhaps they do so, ultimately, by changing the electrical activity of the nerve cells that innervate tumors. It is a provocative idea and one that is testable in both humans and laboratory mice...if there is such a connection, it opens up the possibility that my own cognitive approach to terminal illness — in my case, hopefulness coupled with curiosity — could contribute to keeping my cancer at bay and do so, not through supernatural means, but by altering the electrical activity of tumor-innervating nerve fibers. I hope so, as however they are granted, these extra innings are a pure delight.

Friday, March 31, 2023

Humans are more than AI machines - OpenAI's ChatGPT edits a MindBlog preliminary draft

I am feeling majorly freaked out now... I asked ChatGPT to improve one of the draft paragraphs that I mentioned in this past Tuesday's post. 

My request to ChatGPT

Please make improvements to the following draft of a blog post: 

We humans are AI machines. As with the nerve networks of GPT-4, our developing neuronal connections in the embryo are reinforced by learning correlations that link sensory and motor capabilities. What we have that is missing in the silicon and metal nerve nets of large language models is a developmental history and memory of physical movement in a rich, chemical and physical environment interacting with the genetic developmental program that evolved over millions of years through variation and selection - we are Darwin machines that function in a much richer environment than current AI implementations. There is no reason in principle that we could not begin to construct a machine based on silicon and other elements than carbon, that does not depend on our current energy cycles fueled by the sun, oxygen, and carbon dioxide. Such a machine might develop, as we do, by moving through and developing a history in a distinctive physical and chemical environment.

The response from ChatGPT:

Here is an improved version of the draft blog post: 

Humans Are More Than AI Machines As artificial intelligence (AI) continues to evolve, it's easy to see the similarities between our cognitive processes and those of machines like GPT-4. However, while our developing neuronal connections in the embryo are reinforced by learning correlations that link sensory and motor capabilities, what sets us apart from silicon and metal nerve nets is our developmental history and memory of physical movement in a rich, chemical, and physical environment. 

Unlike machines, we are Darwin machines that function in a much richer environment that current AI implementations. Our genetic developmental program has evolved over millions of years through variation and selection, giving us a unique ability to adapt to our surroundings. We have a developmental history and memory of physical movement in a rich, chemical, and physical environment that has helped shape us into the complex beings we are today. 

There is no reason in principle that we could not begin to construct a machine based on silicon and other elements than carbon, that does not depend on our current energy cycles fueled by the sun, oxygen, and carbon dioxide. However, it's important to note that such a machine would need to develop, as we do, by moving through and developing a history in a distinctive physical and chemical environment. Without this, it may be impossible to replicate the complexity of our cognitive processes and our unique ability to adapt to our environment. 

In conclusion, while our cognitive processes share similarities with AI machines like GPT-4, we are much more than that. Our developmental history and memory of physical movement in a rich, chemical, and physical environment have shaped us into the complex beings we are today. As we continue to explore the possibilities of artificial intelligence, it's important to remember that there may be certain aspects of our cognitive processes that cannot be replicated in machines. 

When I ask ChatGPT to regenerate the response,  it does a better job with the beginning of the draft and has a more inspirational conclusion - a synthesis of the two versions would work best:

Here's a second improved version of the draft blog post:

Humans are often referred to as "AI machines," as our developing neuronal connections in the embryo are reinforced by learning correlations that link sensory and motor capabilities, much like the nerve networks of GPT-4. However, what sets us apart from large language models is our rich developmental history and memory of physical movement in a complex chemical and physical environment, which interacts with the genetic developmental program that evolved over millions of years through variation and selection. We are "Darwin machines" that function in a much richer environment than current AI implementations.

While current AI implementations are limited by their lack of a developmental history and memory of physical movement, there is no reason in principle that we could not construct a machine based on silicon and other elements than carbon. Such a machine could potentially develop, as we do, by moving through and developing a history in a distinctive physical and chemical environment.

However, it is important to note that our current energy cycles are fueled by the sun, oxygen, and carbon dioxide, which have been instrumental in our development as a species. As we continue to explore the possibilities of AI and machine learning, we must be mindful of the unique characteristics that make us human and strive to incorporate them into any machine-based systems we develop. Only then can we truly achieve the potential of AI as a tool for human progress.

 

Note: Microsoft Bing's performance on the above task was much less impressive and not as useful.

Tuesday, March 28, 2023

MindBlog's seduction by AI

I decided to drink the Kool-Aid and go down the rabbit hole of learning more about AI, and have ceased regular MindBlog posts. As I have been reading numerous articles and a few books, I have curbed my urge to rapidly summarize, abstract, and post their main ideas. This has been in the hope that a more integrative summary that feels faintly original might perhaps rise from the mists of my unconscious during a waking period at night. A few paragraphs that feel like they might be of that sort are in draft form. 

I'm going to break my reticence with this post, however, to share with MindBlog readers how simple it is to start playing with OpenAI's ChatGPT or Microsoft Bing's implementation in its new Edge browser. I'm still on the waiting list for Google's Bard. Here is what happens when I ask these engines to 

'Write a poem in iambic pentameter about China in the voice of Paul Krugman':

Microsoft Edge result:

China’s rise has been a sight to behold,
A nation once poor now rich and quite bold,
Its growth has been rapid, its power immense,
Its influence felt in every sense.

But with this growth comes a price to be paid,
A system that’s flawed and quite often waylaid,
The government’s power is vast and unchecked,
And human rights abuses are often decked.

The world must take notice of China’s might,
And work to ensure that it’s used for right,
For if we don’t, then we’ll all pay the price,
As China’s power grows to an even greater size.

So let us work together to ensure,
That China’s rise is one that we can endure,
And that the world remains a place of peace,
Where all nations can grow and find release


ChatGPT result:


In China's great economy, we see
A marvel of our globalized age,
With growth and trade that fascinate me,
And prospects that we cannot gauge.

The nation's rise is hard to deny,
Its factories thrive and cities boom,
And yet, despite the progress nearby,
The country's flaws still cast a gloom.

For China's leaders, too, must choose,
Between democracy and control,
And as their power and wealth accrues,
The danger of repression takes its toll.

But still, we cannot help but be amazed,
At what this nation has achieved,
And though its path is yet untraced,
We hope that it will not be deceived.

For China's future is a global stake,
And we must strive to understand,
The risks and opportunities it may make,
And build a brighter future hand in hand.


And when I ask ChatGPT to ‘regenerate response’:


In China's land, a wonderous sight to see,
A nation grown with lightning-like degree.
Its factories thrive, its cities ever-sprawl,
Its growth unmatched, its rise an envied call.

Yet with its growth, there comes a price to pay,
As smog and pollution cloud the skies of day.
Its workers toil, their wages held at bay,
A system ruled by power's strong display.

And as it rises, tensions also grow,
A clash of values, East and West to know.
A future wrought with both hope and fear,
A world transformed, with China drawing near.

But still, we cannot turn our eyes away,
For in its growth, there lies a chance to sway.
To shape a world, where freedom's light may shine,
And China's rise a force for good in line.

So let us watch and learn from China's ways,
And seek to guide its path in brighter days.
For in its rise, we see the future's face,
And all the challenges we yet must face.




 

Friday, March 17, 2023

Is the hype over A.I. justified? Does it really change everything?

I know, I know…. We’re all tired of the hysteria resulting from the sudden appearance of ChatGPT-like programs that feel to us as though they are intelligent and are already shaping much of our lives. Still, I’m inclined to agree with Sundar Pichai, the chief executive of Google, who said in 2018 that the invention of AI is more profound than the discovery of fire and electricity. New York Times writer Ezra Klein also thinks that things will never be the same. Below are a few clips from his piece, and I focus on a section describing the thoughts of Meghan O'Gieblyn. I'm reading the book he mentions and highly recommend it.
“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
...We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”
That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.
The stakes here are material and they are social and they are metaphysical. O’Gieblyn observes that “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”
This is an inversion of centuries of thought, O’Gieblyn notes, in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with A.I. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” she writes.
If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing toward companies with A.I. expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.
Katja Grace, an A.I. safety researcher, summed up this illogic pithily. Slowing down “would involve coordinating numerous people — we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional.”
One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.
What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognize that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Skepticism is more comfortable. But something Davis writes rings true to me: “In the court of the mind, skepticism makes a great grand vizier, but a lousy lord.”

Wednesday, March 15, 2023

To the mountains - an analysis of China's current state

To any MindBlog readers who might be China buffs, I highly recommend reading Dan Wang's 2022 letter on China's current and future prospects. Here is a clip from the end of the article.
To the mountains
Is there room to maneuver in an era of political tightening? Perhaps so. It’s time to follow the wisdom of the ancients and head into the mountains.
The mountains are still high, though the emperor may no longer be so far away. As Scott wrote [see clip on Scott below], the state has mostly learned to climb the hills. Mostly. There are still some ways to avoid central directives once one is in the mountains. Otherwise, a more subtle form of escape is possible in population cores. One of Scott’s earlier works, Weapons of the Weak, documents everyday forms of peasant resistance that falls short of collective rebellion: foot dragging, petty noncompliance, feigned ignorance, or the strategic use of rude nicknames for officers of the state. Chinese are already good at this stuff. We should be sympathetic to their larger “efforts to hold one’s own against overwhelming odds—a spirit and practice that prevents the worst and promises something better.”
There is something about the Han-Chinese gaze that is transfixed by glories of the state, whether these take the form of big walls, big ships, or big numbers. China’s intellectual tradition is to celebrate state power. It’s perhaps not much of an exaggeration to say that imperial China monopolized the entirety of intellectuals, through its administration of the imperial examination system, which induced the country’s most ambitious to spend their lives studying texts aimed at increasing the power of the state. Thus it’s unsurprising that China failed to develop much of a liberal tradition: court philosophers tend not to be enthusiastic advocates for constraints on the court.
Meanwhile, it’s not a hidden fact that imperial China had its most splendid cultural flourishing when the polity was most fragmented—during times that carry faintly apocalyptic names like the Warring States period, when Confucianism and Daoism came into shape—and that it experienced its worst political decay after continuous centralization, whether Ming or Qing. Perhaps these historical patterns will repeat again.
I’m uncomfortable with the Han-centric view that has so many gradations of barbarians, whether these are mountain folks, horse folks, or just foreign folks.
I wish we can celebrate the rebellious, marginal peoples that have practiced ways to stay at arms-length from the state. It might be a hard ask for the hard men in Beijing to admire unruly mountain people, many of whom have loose ethnic commitments and no written language. But life in Yunnan was much better than being in the big cities last year. “Far from being seen as a regrettable backsliding and privation,” Scott writes: “becoming a barbarian may have produced a marked improvement in safety, nutrition, and social order.”
I advocate for departing from the court center too. So it’s time to say: it’s a barbarian’s life for me.
Here is a clip on Scott's book from earlier in Wang's text.
Yunnan has been a distinguished refuge for peoples tired of the state. It is the heart of a vast zone of highland Southeast Asia described by James C. Scott in The Art of Not Being Governed—the best book I read this year (and which I will be drawing on throughout this piece). Scott writes about the innumerable hill peoples who have repaired to these mountains over the last several millennia, escaping oppression from the Burmese state, the Tibetan state, or most often, the Han-Chinese state.
In Scott’s telling, early states (of several millennia up to a few centuries ago) did not grow because people were drawn towards “civilization” or a luminous court center. They grew because the domineering temper of a rice- or wheat-addicted despot demanded ever greater populations to produce grain surpluses for the glory of his court. The process was dialectical, as wars made the state, and the state made war. Thus most of the people in a population core consisted of captives seized in a military victory or purchased from raiders. Scott goes so far to claim that where one can find an early state, there one will find a population core sustained by coerced labor.

Monday, March 13, 2023

Blood-derived signals as potent drivers of both age-related brain dysfunction and brain rejuvenation.

An open-access review from Bieri et al. has some nice graphics and tables summarizing the varieties of pro-aging and rejuvenating interventions. Here is their abstract:

Aging induces molecular, cellular and functional changes in the adult brain that drive cognitive decline and increase vulnerability to dementia-related neurodegenerative diseases. Leveraging systemic and lifestyle interventions, such as heterochronic parabiosis, administration of ‘young blood’, exercise and caloric restriction, has challenged prevalent views of brain aging as a rigid process and has demonstrated that aging-associated cognitive and cellular impairments can be restored to more youthful levels. Technological advances in proteomic and transcriptomic analyses have further facilitated investigations into the functional impact of intertissue communication on brain aging and have led to the identification of a growing number of pro-aging and pro-youthful factors in blood. In this review, we discuss blood-to-brain communication from a systems physiology perspective with an emphasis on blood-derived signals as potent drivers of both age-related brain dysfunction and brain rejuvenation.

Friday, March 10, 2023

An arthritis drug mimics the anti-aging benefits of youthful blood transfusions

Some edited clips from the New Atlas description of research by Passegué and collaborators on rejuvenating the stem cells located in the bone marrow that produce blood cells:
An aging blood system, because it’s a vector for a lot of proteins, cytokines, and cells, has a lot of bad consequences for the organism...A 70-year-old with a 40-year-old blood system could have a longer healthspan, if not a longer lifespan.
The bone marrow "niche" in which stem cells exist deteriorates over time and becomes overwhelmed by inflammation, which impairs the blood stem cells. One particular inflammatory signal, called IL-1B, is critical to impairing the blood stem cells... since this signal is already implicated in other inflammatory conditions, such as rheumatoid arthritis, there are already drugs in wide use that target it... the researchers used an arthritis drug called anakinra to block IL-1B in elderly mice, and found that the blood stem cells returned to a younger, healthier state. This helped improve the state of the niche, the function of the blood stem cells and the regeneration of blood cells. The treatment worked even better when the drug was administered throughout the life of the mice, not just when they were already old.
Animal tests don't always translate to humans, but this work suggests that treating elderly patients with anti-inflammatory drugs blocking IL-1B function should help with maintaining healthier blood production.

Wednesday, March 08, 2023

A skeptical take on the AI revolution

I want to pass on to MindBlog readers some clips I have made for my own use from the transcript of a podcast interview of Gary Marcus by Ezra Klein. These abstractings help me absorb the material better, and make it easier for me to revisit and recall the arguments at a later date. Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become a leading voice of not quite A.I. skepticism, but skepticism about the A.I. path we’re on. He has founded multiple A.I. companies himself. He thinks artificial intelligence is possible. He thinks it is desirable. But he doesn’t think that what we are doing now — making these systems that do not understand what they are telling us — is going to work out the way we are hoping it will. Here are the clips:
Marcus: the system underneath ChatGPT is the king of pastiche…to a first approximation, it is cutting and pasting things…There’s also a kind of template aspect to it. So it cuts and pastes things, but it can do substitutions, things that paraphrase. So you have A and B in a sequence, it finds something else that looks like A, something else that looks like B, and it puts them together. And its brilliance comes from that when it writes a cool poem. And also its errors come from that because it doesn’t really fully understand what connects A and B.
Klein: But … aren’t human beings also kings of pastiche? On some level I know very, very little about the world directly. If you ask me about, say, the Buddhist concept of emptiness, which I don’t really understand, isn’t my answer also mostly an averaging out of things that I’ve read and heard on the topic, just recast into my own language?
Marcus: Averaging is not actually the same as pastiche. And the real difference is for many of the things you talk about, not all of them, you’re not just mimicking. You have some internal model in your brain of something out there in the world…I have a model of you. I’m talking to you right now, getting to know you, know a little bit about your interests — don’t know everything, but I’m trying to constantly update that internal model. What the pastiche machine is doing is it’s just putting together pieces of text. It doesn’t know what those texts mean.
Klein: Sam Altman, C.E.O. of OpenAI, said “my belief is that you are energy flowing through a neural network.” That’s it. And he means by that a certain kind of learning system.
Marcus: …there’s both mysticism and confusion in what Sam is saying..it’s true that you are, in some sense, just this flow through a neural network. But that doesn’t mean that the neural network in you works anything like the neural networks that OpenAI has built..neural networks that OpenAI has built, first of all, are relatively unstructured. You have, like, 150 different brain areas that, in light of evolution and your genome, are very carefully structured together. It’s a much more sophisticated system than they’re using…
I think it’s mysticism to think that if we just make the systems that we have now bigger with more data, that we’re actually going to get to general intelligence. There’s an idea called, “scale is all you need.”..There’s no law of the universe that says as you make a neural network larger, that you’re inherently going to make it more and more humanlike. There’s some things that you get, so you get better and better approximations to the sound of language, to the sequence of words. But we’re not actually making that much progress on truth…these neural network models that we have right now are not reliable and they’re not truthful…just because you make them bigger doesn’t mean you solve that problem.
Some things get better as we make these neural network models, and some don’t. The reason that some don’t, in particular reliability and truthfulness, is because these systems don’t have those models of the world. They’re just looking, basically, at autocomplete. They’re just trying to autocomplete our sentences. And that’s not the depth that we need to actually get to what people call A.G.I., or artificial general intelligence.
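To make Marcus's "autocomplete" point concrete, here is a deliberately tiny bigram model (my own illustration, not anything from the interview): it continues text purely from observed word-to-word statistics, with no internal model of what the words refer to.

```python
# Minimal bigram "autocomplete": fluent-looking continuations from raw statistics,
# with nothing underneath that knows what a dog or a man is.
import random
from collections import defaultdict

corpus = ("the dog bit the man . the man bit the dog . "
          "the dog chased the cat . the cat chased the mouse .").split()

# Count which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def autocomplete(prompt_word, length=8, seed=0):
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))   # sample a statistically plausible next word
    return " ".join(out)

print(autocomplete("the"))
```

Scaled up by many orders of magnitude and trained on the web, the same next-token logic produces impressively fluent prose, which is exactly why fluency alone is not evidence of an underlying model of the world.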
Klein: from Harry Frankfurt's paper called “On Bullshit”…“The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine may not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”…his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction…bullshit just has no relationship, really, to the truth…what unnerves me a bit about ChatGPT is the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all.
…systems like these pose a real and imminent threat to the fabric of society…You have a news story that looks, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these, or 100, or 1,000, or 10,000? Then it becomes very difficult to monitor them.
We might be able to build new kinds of A.I., and I’m personally interested in doing that, to try to detect them. But we have no existing technology that really protects us from the onslaught, the incredible tidal wave of potential misinformation like this.
Russian trolls spent something like a million dollars a month during the 2016 election…they can now buy their own version of GPT-3 to do it all the time. They pay less than $500,000, and they can do it in limitless quantity instead of being bound by human hours.
…if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t even going to know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that actually threatens the websites themselves, the search engines themselves.
The Klein/Marcus conversation then moves through several further areas: how large language models can be used to craft responses that nudge users toward clicking on advertising links; the diminishing returns of bigger models, which are not helping with comprehension of longer pieces of text; the use of ‘woke’ guardrails that yield pablum as answers to reasonable questions; the lack of progress in determining the trustworthiness of neural network responses; the eventual possible fusion of neural network, symbol-processing, and rule-generating systems; and the numerous hurdles to be overcome before an artificial general intelligence remotely equivalent to ours is constructed.

Monday, March 06, 2023

MindBlog as a portal to OpenAI's ChatGPT? A few stumbles on the way...

After my initial foray into playing with OpenAI's ChatGPT last December, I decided to return for another chat today and was initially blown away by the results. After the exchange shown in the following screenshot, I put a link in the right column of MindBlog (look under Dynamic Views of MindBlog) that makes it a portal to the chatbot. After doing this I clicked on the link and discovered it is bullshit: chatgps.com does not go to OpenAI but to a site that wants to sell that domain name for $20K! So I changed the URL to actually go to OpenAI. The second screenshot below shows ChatGPT's response to being asked to make up a story about Deric Bownds at the University of Wisconsin.

And here is a fairly detailed and plausible story describing my (nonexistent) breakthrough discovery about memory (I worked on vision) after enrolling at the University of Wisconsin (I went to Harvard). To be fair, I did ask it to 'make up' a story:



Friday, March 03, 2023

Empathy lost and regained in a mouse model of dementia

A PNAS Journal Club article by Carolyn Beans points to work by Yao and colleagues showing that the loss of empathy that is especially problematic for those experiencing frontotemporal dementia (FTD, a rare condition that often develops earlier in life than other types of dementia) can be linked to slowed activity in a particular brain region in a mouse model of FTD. When Yao and colleagues experimentally increased that brain activity, empathy returned. Here are the highlights and summary from Yao and collaborators:

 Highlights

• Mice display dmPFC-dependent emotional contagion and other-directed consolation
• Emotional contagion and other-directed consolation are blunted in aged c9FTD mice
• Aged c9FTD mice exhibit reduced pyramidal neuron excitability in the dmPFC
• Enhancing dmPFC activity rescues empathy loss in aged c9FTD mice
Summary
Empathic function is essential for the well-being of social species. Empathy loss is associated with various brain disorders and represents arguably the most distressing feature of frontotemporal dementia (FTD), a leading form of presenile dementia. The neural mechanisms are unknown. We established an FTD mouse model deficient in empathy and observed that aged somatic transgenic mice expressing GGGGCC repeat expansions in C9orf72, a common genetic cause of FTD, exhibited blunted affect sharing and failed to console distressed conspecifics by affiliative contact. Distress-induced consoling behavior activated the dorsomedial prefrontal cortex (dmPFC), which developed profound pyramidal neuron hypoexcitability in aged mutant mice. Optogenetic dmPFC inhibition attenuated affect sharing and other-directed consolation in wild-type mice, whereas chemogenetically enhancing dmPFC excitability rescued empathy deficits in mutant mice, even at advanced ages when substantial cortical atrophy had occurred. These results establish cortical hypoexcitability as a pathophysiological basis of empathy loss in FTD and suggest a therapeutic strategy.

Wednesday, March 01, 2023

Artificial intelligence and personhood

MindBlog hesitates to add to the feeding frenzy of articles about LLMs (large language models) such as OpenAI’s ChatGPT and Microsoft Bing’s “Sydney,” but I want to pass on clips from a fascinating episode of Venkatesh Rao’s “Mediocre Computing” series. It suffers from logorrhea and could use some ruthless editing, but it makes some searing points, which I largely agree with. He starts by posing A.I. as another Copernican moment:
…stripping away yet another layer of our anthropocentric conceits is obvious. But which conceits specifically, and what, if anything is left behind? In case you weren’t keeping track, here’s the current Copernican Moments list:
The Earth goes around the Sun,
Natural selection rather than God created life,
Time and space are relative,
Everything is Heisenberg-uncertain,
“Life” is just DNA’s way of making more DNA,
Computers wipe the floor with us anywhere we can keep score.
There’s not a whole lot left at this point is there? I’m mildly surprised we End-of-History humans even have any anthropocentric conceits left to strip away. But apparently we do. Let’s take a look at this latest Fallen Conceit: Personhood.
…at a basic level: text is all it takes to produce personhood. We knew this from the experience of watching good acting…We just didn’t recognize the significance. Of course you can go beyond, adding a plastic or human body around the text production machinery to enable sex for example, but that’s optional extras. Text is all you need to produce basic see-and-be-seen I-you personhood.
Chatbots do, at a vast scale, and using people’s data traces on the internet rather than how they present in meatspace, what the combination of fiction writers and actors does in producing convincing acting performances of fictional persons.
In both cases, text is all you need. That’s it. You don’t need embodiment, meatbag bodies, rich sensory memories.
This is actually a surprisingly revealing fact. It means we can plausibly exist, at least as social creatures, products of I-you seeings, purely on our language-based maps of reality.
Language is a rich process, but I for one didn’t suspect it was that rich. I thought there was more to seeing and being seen, to I-you relations.
Still, even though text is all you need to produce personhood, the discussion doesn’t end there. Because personhood is not all there is to, for want of a better word, being. Seeing, being seen, and existing at the nexus of a bunch of I-you relationships is not all there is to being.
What is the gap between being and personhood? Just how much of being is constituted by the ability to see and be seen, and being part of I-you relationships?
The ability to doubt, unlike the ability to think (which I do think is roughly equivalent to the ability to see and be seen in I-you ways), is not reducible to text. In particular, text is all it takes to think and produce or consume unironically believable personhood, but doubt requires an awareness of the potential for misregistration between linguistic maps and the phenomenological territory of life. If text is all you have, you can be a person, but you cannot be a person in doubt.
Doubt is eerily missing in the chat transcripts I’ve seen, from both ChatGPT and Sydney. There are linguistic markers of doubt, but they feel off, like a color-blind person cleverly describing colors. In a discussion, one person suggested this is partly explained by the training data. Online, textually performed personas are uncharacteristically low on doubt, since the medium encourages a kind of confident stridency.
But I think there’s something missing in a more basic way, in the warp and woof of the conversational texture. At some basic level, rich though it is, text is missing important non-linguistic dimensions of the experience of being. But what’s missing isn’t cosmetic aspects of physicality, or the post-textual intimate zones of relating, like sex (the convincing sexbots aren’t that far away). What’s missing is doubt itself.
The signs, in the transcripts, of repeated convergence to patterns of personhood that present as high-confidence paranoia are, I think, due to the gap between thought and doubt; cogito and dubito. Text is all you need to be a person, but context is additionally necessary to be a sane person and a full being. And doubt is an essential piece of the puzzle there.
So where does doubt live? Where is the aspect of being that’s doubt, but not “thought” in a textual sense.
For one, it lives in the sheer quantity of bits in the world that are not textual. There are exabytes of textual data online, but there is orders of magnitude more data in every grain of sand. Reality just has vastly more data than even the impressively rich map that is language. And to the extent we cannot avoid being aware of this ocean of reality unfactored into our textual understandings, it shapes and creates our sense of being.
For another, even though with our limited senses we can only take in a tiny and stylized fraction of this overwhelming mass of bits around us, the stream of inbound sense-bits still utterly dwarfs what eventually trickles out as textual performances of personhood (and what is almost the same thing in my opinion, conventional social performances “in-person” which are not significantly richer than text — expressions of emotion add perhaps a few dozen bytes of bandwidth for example — I think of this sort of information stream as “text-equivalent” — it only looks plausibly richer than text but isn’t).
But the most significant part of the gap is probably experiential dark matter: we know we know vastly more than we can say. The gap between what we can capture in words and what we “know” of reality in some pre-linguistic sense is vast. The gap between an infant’s tentative babbling and Shakespeare is a rounding error relative to the gap within each of us between the knowable and the sayable.
So while it is surprising (though… is it really?) that text is all it takes to perform personhood with enough fidelity to provoke involuntary I-you relating in a non-trivial fraction of the population, it’s not all there is to being. This is why I so strongly argue for embodiment as a necessary feature of the fullest kind of AI.
The most surprising thing for me has been the fact that so many people are so powerfully affected by the Copernican moment and the dismantling of the human specialness of personhood.
I think I now see why it’s apparently a traumatic moment for at least some humans. The advent of chatbots that can perform personhood that at least some people can’t not relate to in I-you ways, coupled with the recognition that text is all it takes to produce such personhood, forces a hard decision.
Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.
And both these options are apparently extremely traumatic prospects. Either piles of mechanically digested text are spiritually special, or you are not. Either there is a sudden and alarming increase in your social universe, or a sudden sharp devaluation of mutualism as a component of identity.
Remember — I’m defining personhood very narrowly as the ability to be seen in I-you ways. It’s a narrow and limited aspect of being, as I have argued, but one that average humans are exceptionally attached to.
We are of course, very attached to many particular aspects of our beings, and they are not all subtle and ineffable. Most are in fact quite crude. We have identities anchored to weight, height, skin color, evenness of teeth, baldness, test scores, titles, net worths, cars, and many other things that are eminently effable. And many people have no issues getting bariatric surgery, wearing lifts, lightening or tanning their skin, getting orthodontics, hair implants, faking test scores, signaling more wealth than they possess, and so on. The general level of “sacredness” of strong identity attachments is fairly low.
But personhood, being “seen,” has hitherto seemed ineffably special. We think it’s the “real” us that is seen and does such seeing. We are somewhat prepared to fake or pragmatically alter almost everything else about ourselves, but treat personhood as a sacred thing.
Everything else is a “shallow” preliminary. But what is the “deep” or “real” you that we think lurks beneath? I submit that it is in fact a sacralized personhood — the ability to see and be seen. And at least for some people I know personally, that’s all there is to the real-them. They seem to sort of vanish when they are not being seen (and panic mightily about it, urgently and aggressively arranging their lives to ensure they’re always being seen, so they can exist — Trump and Musk are two prominent public examples).
And the trauma of this moment — again for some, not all of us — lies in the fact that text is all you need to produce this sacredly attached aspect of being.
I have a feeling, as this technology becomes more widespread and integrated into everyday life, the majority of humans will initially choose some tortured, conflicted version of the first option — accepting that they cannot help but see piles of digested data in I-you ways, and trying to reclaim some sense of fragile, but still-sacred personhood in the face of such accommodation, while according as little sacredness as possible to the artificial persons, and looking for ways to keep them in their place, creating a whole weird theater of an expanding social universe.
A minority of us will be choosing the second option, but I suspect in the long run of history, this is in fact the “right” answer in some sense, and will become the majority answer. Just as with the original Copernican moment, the “right” answer was to let go of attachment to the idea of Earth as the center of the universe. Now the right answer is to let go of the idea that personhood and I-you seeing is special. It’s just a special case of I-it seeing that some piles of digested text are as capable of as tangles of neurons.
…there will also be a more generative and interesting aspect. Once we lose our annoying attachment to sacred personhood, we can also lose our attachment to specific personhoods we happen to have grown into, and make personhood a medium of artistic expression that we can change as easily as clothes or hairstyles. If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs (Large Language Models) to help you see-and-be-seen in arbitrary new ways?
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
What’s left, once this layer of anthropocentric conceit, static, stable personhood, dissolves in a flurry of multiplied matrices, Ballardian banalities, and imaginative larped personhoods being cheaply hallucinated in and out of existence with help from computers?
I think what is left is the irreducible individual subjective, anchored in dubito ergo sum. I doubt therefore I am.

Monday, February 27, 2023

Possible mechanism of psychedelic therapeutic effects

From the latest issue of Science Magazine:  

The mechanism underlying psychedelic action

Psychedelic compounds promote cortical structural and functional neuroplasticity through the activation of serotonin 2A receptors. However, the mechanisms by which receptor activation leads to changes in neuronal growth are still poorly defined. Vargas et al. found that activation of intracellular serotonin 2A receptors is responsible for the plasticity-promoting and antidepressant-like properties of psychedelic compounds, but serotonin may not be the natural ligand for those intracellular receptors (see the Perspective by Hess and Gould). —PRS
Abstract
Decreased dendritic spine density in the cortex is a hallmark of several neuropsychiatric diseases, and the ability to promote cortical neuron growth has been hypothesized to underlie the rapid and sustained therapeutic effects of psychedelics. Activation of 5-hydroxytryptamine (serotonin) 2A receptors (5-HT2ARs) is essential for psychedelic-induced cortical plasticity, but it is currently unclear why some 5-HT2AR agonists promote neuroplasticity, whereas others do not. We used molecular and genetic tools to demonstrate that intracellular 5-HT2ARs mediate the plasticity-promoting properties of psychedelics; these results explain why serotonin does not engage similar plasticity mechanisms. This work emphasizes the role of location bias in 5-HT2AR signaling, identifies intracellular 5-HT2ARs as a therapeutic target, and raises the intriguing possibility that serotonin might not be the endogenous ligand for intracellular 5-HT2ARs in the cortex.