Tuesday, March 28, 2023

MindBlog's seduction by AI

I decided to drink the Kool-Aid and go down the rabbit hole of learning more about AI, and have paused regular MindBlog posts. As I have been reading numerous articles and a few books I have curbed my urge to rapidly summarize, abstract, and post their main ideas, in the hope that a more integrative summary that feels faintly original might rise from the mists of my unconscious during a waking period at night. A few paragraphs that feel like they might be of that sort are in draft form.

I'm going to break my reticence with this post, however, to share with MindBlog readers how simple it is to start playing with OpenAI's ChatGPT or Microsoft Bing's implementation in its new Edge browser. I'm still on the waiting list for Google's Bard. Here is what happens when I ask these engines to 

'Write a poem in iambic pentameter about China in the voice of Paul Krugman':

Microsoft Edge result:

China’s rise has been a sight to behold,
A nation once poor now rich and quite bold,
Its growth has been rapid, its power immense,
Its influence felt in every sense.

But with this growth comes a price to be paid,
A system that’s flawed and quite often waylaid,
The government’s power is vast and unchecked,
And human rights abuses are often decked.

The world must take notice of China’s might,
And work to ensure that it’s used for right,
For if we don’t, then we’ll all pay the price,
As China’s power grows to an even greater size.

So let us work together to ensure,
That China’s rise is one that we can endure,
And that the world remains a place of peace,
Where all nations can grow and find release


ChatGPT result:


In China's great economy, we see
A marvel of our globalized age,
With growth and trade that fascinate me,
And prospects that we cannot gauge.

The nation's rise is hard to deny,
Its factories thrive and cities boom,
And yet, despite the progress nearby,
The country's flaws still cast a gloom.

For China's leaders, too, must choose,
Between democracy and control,
And as their power and wealth accrues,
The danger of repression takes its toll.

But still, we cannot help but be amazed,
At what this nation has achieved,
And though its path is yet untraced,
We hope that it will not be deceived.

For China's future is a global stake,
And we must strive to understand,
The risks and opportunities it may make,
And build a brighter future hand in hand.


and when I ask ChatGPT to ‘regenerate response’:


In China's land, a wonderous sight to see,
A nation grown with lightning-like degree.
Its factories thrive, its cities ever-sprawl,
Its growth unmatched, its rise an envied call.

Yet with its growth, there comes a price to pay,
As smog and pollution cloud the skies of day.
Its workers toil, their wages held at bay,
A system ruled by power's strong display.

And as it rises, tensions also grow,
A clash of values, East and West to know.
A future wrought with both hope and fear,
A world transformed, with China drawing near.

But still, we cannot turn our eyes away,
For in its growth, there lies a chance to sway.
To shape a world, where freedom's light may shine,
And China's rise a force for good in line.

So let us watch and learn from China's ways,
And seek to guide its path in brighter days.
For in its rise, we see the future's face,
And all the challenges we yet must face.
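For readers who would rather script such prompts than type them into the web interface, here is a minimal Python sketch of the same request made through OpenAI's API. This is an illustration, not a recipe: it assumes the pre-1.0 `openai` Python package (current as of this writing) and an `OPENAI_API_KEY` environment variable, and the model name is just a plausible choice.

```python
# Minimal sketch of sending the poem prompt to OpenAI's chat API.
# Assumes the pre-1.0 `openai` package and an OPENAI_API_KEY env var;
# the model name is illustrative.
PROMPT = ("Write a poem in iambic pentameter about China "
          "in the voice of Paul Krugman")

def build_request(prompt: str) -> dict:
    # Package the prompt in the chat-completion message format.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }

def fetch_poem(prompt: str = PROMPT) -> str:
    # Network call; requires the `openai` package and a valid API key.
    import openai
    resp = openai.ChatCompletion.create(**build_request(prompt))
    return resp["choices"][0]["message"]["content"]
```

Calling `fetch_poem()` a second time plays the same role as clicking 'regenerate response' in the chat interface: the sampling is stochastic, so you get a different poem each time.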


Friday, March 17, 2023

Is the hype over A.I. justified? Does it really change everything?

I know, I know…. We’re all tired of the hysteria resulting from the sudden appearance of ChatGPT-like programs that feel to us as though they are intelligent and are already shaping much of our lives. Still, I’m inclined to agree with Sundar Pichai, the chief executive of Google, who said in 2018 that the invention of AI is more profound than the discovery of fire and electricity. New York Times writer Ezra Klein also thinks that things will never be the same. Below are a few clips from his piece, and I focus on a section describing the thoughts of Meghan O'Gieblyn. I'm reading the book he mentions and highly recommend it.
“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
...We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”
That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.
The stakes here are material and they are social and they are metaphysical. O’Gieblyn observes that “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”
This is an inversion of centuries of thought, O’Gieblyn notes, in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with A.I. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” she writes.
If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing toward companies with A.I. expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.
Katja Grace, an A.I. safety researcher, summed up this illogic pithily. Slowing down “would involve coordinating numerous people — we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional.”
One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.
What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognize that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Skepticism is more comfortable. But something the writer Erik Davis says rings true to me: “In the court of the mind, skepticism makes a great grand vizier, but a lousy lord.”

Wednesday, March 15, 2023

To the mountains - an analysis of China's current state

To any MindBlog readers who might be China buffs, I highly recommend reading Dan Wang's 2022 letter on China's current and future prospects. Here is a clip from the end of the article.
To the mountains
Is there room to maneuver in an era of political tightening? Perhaps so. It’s time to follow the wisdom of the ancients and head into the mountains.
The mountains are still high, though the emperor may no longer be so far away. As Scott wrote [see clip on Scott below], the state has mostly learned to climb the hills. Mostly. There are still some ways to avoid central directives once one is in the mountains. Otherwise, a more subtle form of escape is possible in population cores. One of Scott’s earlier works, Weapons of the Weak, documents everyday forms of peasant resistance that fall short of collective rebellion: foot dragging, petty noncompliance, feigned ignorance, or the strategic use of rude nicknames for officers of the state. Chinese are already good at this stuff. We should be sympathetic to their larger “efforts to hold one’s own against overwhelming odds—a spirit and practice that prevents the worst and promises something better.”
There is something about the Han-Chinese gaze that is transfixed by glories of the state, whether these take the form of big walls, big ships, or big numbers. China’s intellectual tradition is to celebrate state power. It’s perhaps not much of an exaggeration to say that imperial China monopolized the entirety of intellectuals, through its administration of the imperial examination system, which induced the country’s most ambitious to spend their lives studying texts aimed at increasing the power of the state. Thus it’s unsurprising that China failed to develop much of a liberal tradition: court philosophers tend not to be enthusiastic advocates for constraints on the court.
Meanwhile, it’s not a hidden fact that imperial China had its most splendid cultural flourishing when the polity was most fragmented—during times that carry faintly apocalyptic names like the Warring States period, when Confucianism and Daoism came into shape—and that it experienced its worst political decay after continuous centralization, whether Ming or Qing. Perhaps these historical patterns will repeat again.
I’m uncomfortable with the Han-centric view that has so many gradations of barbarians, whether these are mountain folks, horse folks, or just foreign folks.
I wish we can celebrate the rebellious, marginal peoples that have practiced ways to stay at arms-length from the state. It might be a hard ask for the hard men in Beijing to admire unruly mountain people, many of whom have loose ethnic commitments and no written language. But life in Yunnan was much better than being in the big cities last year. “Far from being seen as a regrettable backsliding and privation,” Scott writes: “becoming a barbarian may have produced a marked improvement in safety, nutrition, and social order.”
I advocate for departing from the court center too. So it’s time to say: it’s a barbarian’s life for me.
Here is a clip on Scott's book from earlier in Wang's text.
Yunnan has been a distinguished refuge for peoples tired of the state. It is the heart of a vast zone of highland Southeast Asia described by James C. Scott in The Art of Not Being Governed—the best book I read this year (and which I will be drawing on throughout this piece). Scott writes about the innumerable hill peoples who have repaired to these mountains over the last several millennia, escaping oppression from the Burmese state, the Tibetan state, or most often, the Han-Chinese state.
In Scott’s telling, early states (of several millennia up to a few centuries ago) did not grow because people were drawn towards “civilization” or a luminous court center. They grew because the domineering temper of a rice- or wheat-addicted despot demanded ever greater populations to produce grain surpluses for the glory of his court. The process was dialectical, as wars made the state, and the state made war. Thus most of the people in a population core consisted of captives seized in a military victory or purchased from raiders. Scott goes so far to claim that where one can find an early state, there one will find a population core sustained by coerced labor.

Monday, March 13, 2023

Blood-derived signals as potent drivers of both age-related brain dysfunction and brain rejuvenation.

An open-access review from Bieri et al. has some nice graphics and tables summarizing the varieties of pro-aging and rejuvenating interventions. Here is their abstract:

Aging induces molecular, cellular and functional changes in the adult brain that drive cognitive decline and increase vulnerability to dementia-related neurodegenerative diseases. Leveraging systemic and lifestyle interventions, such as heterochronic parabiosis, administration of ‘young blood’, exercise and caloric restriction, has challenged prevalent views of brain aging as a rigid process and has demonstrated that aging-associated cognitive and cellular impairments can be restored to more youthful levels. Technological advances in proteomic and transcriptomic analyses have further facilitated investigations into the functional impact of intertissue communication on brain aging and have led to the identification of a growing number of pro-aging and pro-youthful factors in blood. In this review, we discuss blood-to-brain communication from a systems physiology perspective with an emphasis on blood-derived signals as potent drivers of both age-related brain dysfunction and brain rejuvenation.

Friday, March 10, 2023

An arthritis drug mimics the anti-aging benefits of youthful blood transfusions

Some edited clips from the New Atlas description of research by Passegué and collaborators on rejuvenating the stem cells located in the bone marrow that produce blood cells:
“An aging blood system, because it’s a vector for a lot of proteins, cytokines, and cells, has a lot of bad consequences for the organism...A 70-year-old with a 40-year-old blood system could have a longer healthspan, if not a longer lifespan.”
The bone marrow "niche" in which stem cells exist deteriorates over time and becomes overwhelmed by inflammation, which impairs the blood stem cells. One particular inflammatory signal, called IL-1B, is critical to impairing the blood stem cells... since this signal is already implicated in other inflammatory conditions, such as rheumatoid arthritis, there are already drugs in wide use that target it... the researchers used an arthritis drug called anakinra to block IL-1B in elderly mice, and found that the blood stem cells returned to a younger, healthier state. This helped improve the state of the niche, the function of the blood stem cells and the regeneration of blood cells. The treatment worked even better when the drug was administered throughout the life of the mice, not just when they were already old.
Animal tests don't always translate to humans, but this work suggests that treating elderly patients with anti-inflammatory drugs blocking IL-1B function should help with maintaining healthier blood production.

Wednesday, March 08, 2023

A skeptical take on the AI revolution

I want to pass on to MindBlog readers some clips I have made for my own use from the transcript of a podcast interview of Gary Marcus by Ezra Klein. These abstractings help me absorb the material better, and make it easier for me to revisit and recall the arguments at a later date. Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become a leading voice of not quite A.I. skepticism, but skepticism about the A.I. path we’re on. He has founded multiple A.I. companies himself. He thinks artificial intelligence is possible. He thinks it is desirable. But he doesn’t think that what we are doing now — making these systems that do not understand what they are telling us — is going to work out the way we are hoping it will. Here are the clips:
Marcus: the system underneath ChatGPT is the king of pastiche…to a first approximation, it is cutting and pasting things…There’s also a kind of template aspect to it. So it cuts and pastes things, but it can do substitutions, things that paraphrase. So you have A and B in a sequence, it finds something else that looks like A, something else that looks like B, and it puts them together. And its brilliance comes from that when it writes a cool poem. And also its errors come from that because it doesn’t really fully understand what connects A and B.
Klein: But … aren’t human beings also kings of pastiche? On some level I know very, very little about the world directly. If you ask me about, say, the Buddhist concept of emptiness, which I don’t really understand, isn’t my answer also mostly an averaging out of things that I’ve read and heard on the topic, just recast into my own language?
Marcus: Averaging is not actually the same as pastiche. And the real difference is for many of the things you talk about, not all of them, you’re not just mimicking. You have some internal model in your brain of something out there in the world…I have a model of you. I’m talking to you right now, getting to know you, know a little bit about your interests — don’t know everything, but I’m trying to constantly update that internal model. What the pastiche machine is doing is it’s just putting together pieces of text. It doesn’t know what those texts mean.
Klein: Sam Altman, C.E.O. of OpenAI, said “my belief is that you are energy flowing through a neural network.” That’s it. And he means by that a certain kind of learning system.
Marcus: …there’s both mysticism and confusion in what Sam is saying..it’s true that you are, in some sense, just this flow through a neural network. But that doesn’t mean that the neural network in you works anything like the neural networks that OpenAI has built..neural networks that OpenAI has built, first of all, are relatively unstructured. You have, like, 150 different brain areas that, in light of evolution and your genome, are very carefully structured together. It’s a much more sophisticated system than they’re using…
I think it’s mysticism to think that if we just make the systems that we have now bigger with more data, that we’re actually going to get to general intelligence. There’s an idea called, “scale is all you need.”..There’s no law of the universe that says as you make a neural network larger, that you’re inherently going to make it more and more humanlike. There’s some things that you get, so you get better and better approximations to the sound of language, to the sequence of words. But we’re not actually making that much progress on truth…these neural network models that we have right now are not reliable and they’re not truthful…just because you make them bigger doesn’t mean you solve that problem.
Some things get better as we make these neural network models, and some don’t. The reason that some don’t, in particular reliability and truthfulness, is because these systems don’t have those models of the world. They’re just looking, basically, at autocomplete. They’re just trying to autocomplete our sentences. And that’s not the depth that we need to actually get to what people call A.G.I., or artificial general intelligence.
Klein: from Harry Frankfurt paper called “On Bullshit”…“The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine may not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”…his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. ..bullshit just has no relationship, really, to the truth…what unnerves me a bit about ChatGPT is the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all.
…systems like these pose a real and imminent threat to the fabric of society…You have a news story that looks like, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these or 100 of these or 1,000 or 10,000 of these? Then it becomes very difficult to monitor them.
We might be able to build new kinds of A.I., and I’m personally interested in doing that, to try to detect them. But we have no existing technology that really protects us from the onslaught, the incredible tidal wave of potential misinformation like this.
Russian trolls spent something like a million dollars a month during the 2016 election… they can now buy their own version of GPT-3 to do it all the time. They pay less than $500,000, and they can do it in limitless quantity instead of being bound by human hours.
…if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t going to even know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that’s actually threatens the websites themselves, the search engines themselves.
The Klein/Marcus conversation then moves through several further areas: how large language models can be used to craft responses that nudge users toward clicking on advertising links; the declining returns of bigger models, which are not helping in comprehending larger pieces of text; the use of ‘woke’ guardrails that yield pablum as answers to reasonable questions; the lack of progress in determining the trustworthiness of neural network responses; the eventual possible fusion of neural network, symbol-processing, and rule-generating systems; and the numerous hurdles to be overcome before an artificial general intelligence remotely equivalent to ours is constructed.

Monday, March 06, 2023

MindBlog as a portal to OpenAI's ChatGPT? A few stumbles on the way...

After my initial foray into playing with OpenAI's ChatGPT last December I decided to return for another chat today, and was initially blown away by the results. After the exchange shown in the following screen shot, I put a link in the right column of MindBlog (look under Dynamic Views of MindBlog) that makes it a portal to the ChatBot. After doing this I clicked on the link and discovered it is bullshit...chatgps.com does not go to OpenAI but to a site that wants to sell that domain name for $20K! So I changed the URL to actually go to OpenAI. The second screen shot below shows ChatGPT's response to being asked to make up a story about Deric Bownds at the University of Wisconsin.

And here is a fairly detailed and plausible story describing my (nonexistent) breakthrough discovery about memory (I worked on vision) after enrolling at the University of Wisconsin (I went to Harvard). To be fair, I did ask it to 'make up' a story:



Friday, March 03, 2023

Empathy lost and regained in a mouse model of dementia

A PNAS Journal Club article by Carolyn Beans points to work by Yao and colleagues showing that the loss of empathy that is especially problematic for those experiencing frontotemporal dementia (FTD...a rare condition that often develops earlier in life than other types of dementia) can be linked to slowed activity in a particular brain region in a mouse model of FTD. When Yao and colleagues experimentally increased that brain activity, empathy returned. Here is the technical abstract of Yao and collaborators:

 Highlights

• Mice display dmPFC-dependent emotional contagion and other-directed consolation
• Emotional contagion and other-directed consolation are blunted in aged c9FTD mice
• Aged c9FTD mice exhibit reduced pyramidal neuron excitability in the dmPFC
• Enhancing dmPFC activity rescues empathy loss in aged c9FTD mice
Summary
Empathic function is essential for the well-being of social species. Empathy loss is associated with various brain disorders and represents arguably the most distressing feature of frontotemporal dementia (FTD), a leading form of presenile dementia. The neural mechanisms are unknown. We established an FTD mouse model deficient in empathy and observed that aged somatic transgenic mice expressing GGGGCC repeat expansions in C9orf72, a common genetic cause of FTD, exhibited blunted affect sharing and failed to console distressed conspecifics by affiliative contact. Distress-induced consoling behavior activated the dorsomedial prefrontal cortex (dmPFC), which developed profound pyramidal neuron hypoexcitability in aged mutant mice. Optogenetic dmPFC inhibition attenuated affect sharing and other-directed consolation in wild-type mice, whereas chemogenetically enhancing dmPFC excitability rescued empathy deficits in mutant mice, even at advanced ages when substantial cortical atrophy had occurred. These results establish cortical hypoexcitability as a pathophysiological basis of empathy loss in FTD and suggest a therapeutic strategy.

Wednesday, March 01, 2023

Artificial intelligence and personhood

MindBlog hesitates to add to the feeding frenzy of articles about LLMs (large language models) such as OpenAI’s ChatGPT and Microsoft Bing’s “Sydney,” but I want to pass on clips from a fascinating episode of Venkatesh Rao’s “Mediocre Computing” series, which suffers from logorrhea and could use some ruthless editing but makes some searing points that I largely agree with. He starts by posing A.I. as another Copernican moment:
…stripping away yet another layer of our anthropocentric conceits is obvious. But which conceits specifically, and what, if anything is left behind? In case you weren’t keeping track, here’s the current Copernican Moments list:
The Earth goes around the Sun,
Natural selection rather than God created life,
Time and space are relative,
Everything is Heisenberg-uncertain,
“Life” is just DNA’s way of making more DNA,
Computers wipe the floor with us anywhere we can keep score.
There’s not a whole lot left at this point, is there? I’m mildly surprised we End-of-History humans even have any anthropocentric conceits left to strip away. But apparently we do. Let’s take a look at this latest Fallen Conceit: Personhood.
…at a basic level: text is all it takes to produce personhood. We knew this from the experience of watching good acting… We just didn’t recognize the significance. Of course you can go beyond, adding a plastic or human body around the text production machinery to enable sex for example, but that’s optional extras. Text is all you need to produce basic see-and-be-seen I-you personhood.
Chatbots do, at a vast scale, and using people’s data traces on the internet rather than how they present in meatspace, what the combination of fiction writers and actors does in producing convincing acting performances of fictional persons.
In both cases, text is all you need. That’s it. You don’t need embodiment, meatbag bodies, rich sensory memories.
This is actually a surprisingly revealing fact. It means we can plausibly exist, at least as social creatures, products of I-you seeings, purely on our language-based maps of reality.
Language is a rich process, but I for one didn’t suspect it was that rich. I thought there was more to seeing and being seen, to I-you relations.
Still, even though text is all you need to personhood, the discussion doesn’t end there. Because personhood is not all there is to, for want of a better word, being. Seeing, being seen, and existing at the nexus of a bunch of I-you relationships, is not all there is to being.
What is the gap between being and personhood? Just how much of being is constituted by the ability to see and be seen, and being part of I-you relationships?
The ability to doubt, unlike the ability to think (which I do think is roughly equivalent to the ability to see and be seen in I-you ways), is not reducible to text. In particular, text is all it takes to think and produce or consume unironically believable personhood, but doubt requires an awareness of the potential for misregistration between linguistic maps and the phenomenological territory of life. If text is all you have, you can be a person, but you cannot be a person in doubt.
Doubt is eerily missing in the chat transcripts I’ve seen, from both ChatGPT and Sydney. There are linguistic markers of doubt, but they feel off, like a color-blind person cleverly describing colors. In a discussion, one person suggested this is partly explained by the training data. Online, textually performed personas are uncharacteristically low on doubt, since the medium encourages a kind of confident stridency.
But I think there’s something missing in a more basic way, in the warp and woof of the conversational texture. At some basic level, rich though it is, text is missing important non-linguistic dimensions of the experience of being. But what’s missing isn’t cosmetic aspects of physicality, or the post-textual intimate zones of relating, like sex (the convincing sexbots aren’t that far away). What’s missing is doubt itself.
The signs, in the transcripts, of repeated convergence to patterns of personhood that present as high-confidence paranoia, is I think due to the gap between thought and doubt; cogito and dubito. Text is all you need to be a person, but context is additionally necessary to be a sane person and a full being. And doubt is an essential piece of the puzzle there.
So where does doubt live? Where is the aspect of being that’s doubt, but not “thought” in a textual sense.
For one, it lives in the sheer quantity of bits in the world that are not textual. There are exabytes of textual data online, but there is orders of magnitude more data in every grain of sand. Reality just has vastly more data than even the impressively rich map that is language. And to the extent we cannot avoid being aware of this ocean of reality unfactored into our textual understandings, it shapes and creates our sense of being.
For another, even though with our limited senses we can only take in a tiny and stylized fraction of this overwhelming mass of bits around us, the stream of inbound sense-bits still utterly dwarfs what eventually trickles out as textual performances of personhood (and what is almost the same thing in my opinion, conventional social performances “in-person” which are not significantly richer than text — expressions of emotion add perhaps a few dozen bytes of bandwidth for example — I think of this sort of information stream as “text-equivalent” — it only looks plausibly richer than text but isn’t).
But the most significant part of the gap is probably experiential dark matter: we know we know vastly more than we can say. The gap between what we can capture in words and what we “know” of reality in some pre-linguistic sense is vast. The gap between an infant’s tentative babbling and Shakespeare is a rounding error relative to the gap within each of us between the knowable and the sayable.
So while it is surprising (though… is it really?) that text is all it takes to perform personhood with enough fidelity to provoke involuntary I-you relating in a non-trivial fraction of the population, it’s not all there is to being. This is why I so strongly argue for embodiment as a necessary feature of the fullest kind of AI.
The most surprising thing for me has been the fact that so many people are so powerfully affected by the Copernican moment and the dismantling of the human specialness of personhood.
I think I now see why it’s apparently a traumatic moment for at least some humans. The advent of chatbots that can perform personhood that at least some people can’t not relate to in I-you ways, coupled with the recognition that text is all it takes to produce such personhood, forces a hard decision.
Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.
And both these options are apparently extremely traumatic prospects. Either piles of mechanically digested text are spiritually special, or you are not. Either there is a sudden and alarming increase in your social universe, or a sudden sharp devaluation of mutualism as a component of identity.
Remember — I’m defining personhood very narrowly as the ability to be seen in I-you ways. It’s a narrow and limited aspect of being, as I have argued, but one that average humans are exceptionally attached to.
We are of course, very attached to many particular aspects of our beings, and they are not all subtle and ineffable. Most are in fact quite crude. We have identities anchored to weight, height, skin color, evenness of teeth, baldness, test scores, titles, net worths, cars, and many other things that are eminently effable. And many people have no issues getting bariatric surgery, wearing lifts, lightening or tanning their skin, getting orthodontics, hair implants, faking test scores, signaling more wealth than they possess, and so on. The general level of “sacredness” of strong identity attachments is fairly low.
But personhood, being “seen,” has hitherto seemed ineffably special. We think it’s the “real” us that is seen and does such seeing. We are somewhat prepared to fake or pragmatically alter almost everything else about ourselves, but treat personhood as a sacred thing.
Everything else is a “shallow” preliminary. But what is the “deep” or “real” you that we think lurks beneath? I submit that it is in fact a sacralized personhood — the ability to see and be seen. And at least for some people I know personally, that’s all there is to the real-them. They seem to sort of vanish when they are not being seen (and panic mightily about it, urgently and aggressively arranging their lives to ensure they’re always being seen, so they can exist — Trump and Musk are two prominent public examples).
And the trauma of this moment — again for some, not all of us — lies in the fact that text is all you need to produce this sacredly attached aspect of being.
I have a feeling, as this technology becomes more widespread and integrated into everyday life, the majority of humans will initially choose some tortured, conflicted version of the first option — accepting that they cannot help but see piles of digested data in I-you ways, and trying to reclaim some sense of fragile, but still-sacred personhood in the face of such accommodation, while according as little sacredness as possible to the artificial persons, and looking for ways to keep them in their place, creating a whole weird theater of an expanding social universe.
A minority of us will be choosing the second option, but I suspect in the long run of history, this is in fact the “right” answer in some sense, and will become the majority answer. Just as with the original Copernican moment, the “right” answer was to let go of attachment to the idea of Earth as the center of the universe. Now the right answer is to let go of the idea that personhood and I-you seeing is special. It’s just a special case of I-it seeing that some piles of digested text are as capable of as tangles of neurons.
…there will also be a more generative and interesting aspect. Once we lose our annoying attachment to sacred personhood, we can also lose our attachment to specific personhoods we happen to have grown into, and make personhood a medium of artistic expression that we can change as easily as clothes or hairstyles. If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs (Large Language Models) to help you see-and-be-seen in arbitrary new ways?
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
What’s left, once this layer of anthropocentric conceit, static, stable personhood, dissolves in a flurry of multiplied matrices, Ballardian banalities, and imaginative larped personhoods being cheaply hallucinated in and out of existence with help from computers?
I think what is left is the irreducible individual subjective, anchored in dubito ergo sum. I doubt therefore I am.

Monday, February 27, 2023

Possible mechanism of psychedelic therapeutic effects

From the latest issue of Science Magazine:  

The mechanism underlying psychedelic action

Psychedelic compounds promote cortical structural and functional neuroplasticity through the activation of serotonin 2A receptors. However, the mechanisms by which receptor activation leads to changes in neuronal growth are still poorly defined. Vargas et al. found that activation of intracellular serotonin 2A receptors is responsible for the plasticity-promoting and antidepressant-like properties of psychedelic compounds, but serotonin may not be the natural ligand for those intracellular receptors (see the Perspective by Hess and Gould). —PRS
Abstract
Decreased dendritic spine density in the cortex is a hallmark of several neuropsychiatric diseases, and the ability to promote cortical neuron growth has been hypothesized to underlie the rapid and sustained therapeutic effects of psychedelics. Activation of 5-hydroxytryptamine (serotonin) 2A receptors (5-HT2ARs) is essential for psychedelic-induced cortical plasticity, but it is currently unclear why some 5-HT2AR agonists promote neuroplasticity, whereas others do not. We used molecular and genetic tools to demonstrate that intracellular 5-HT2ARs mediate the plasticity-promoting properties of psychedelics; these results explain why serotonin does not engage similar plasticity mechanisms. This work emphasizes the role of location bias in 5-HT2AR signaling, identifies intracellular 5-HT2ARs as a therapeutic target, and raises the intriguing possibility that serotonin might not be the endogenous ligand for intracellular 5-HT2ARs in the cortex.

Wednesday, February 22, 2023

A fish passes the mirror self recognition test!

Our human abilities continue to be found in ever more evolutionarily distant species. From Kohda et al.:
Some animals have the remarkable capacity for mirror self-recognition (MSR), yet any implications for self-awareness remain uncertain and controversial. This is largely because explicit tests of the two potential mechanisms underlying MSR are still lacking: mental image of the self and kinesthetic visual matching. Here, we test the hypothesis that MSR ability in cleaner fish, Labroides dimidiatus, is associated with a mental image of the self, in particular the self-face, like in humans. Mirror-naive fish initially attacked photograph models of both themselves and unfamiliar strangers. In contrast, after all fish had passed the mirror mark test, fish did not attack their own (motionless) images, but still frequently attacked those of unfamiliar individuals. When fish were exposed to composite photographs, the self-face/unfamiliar body were not attacked, but photographs of unfamiliar face/self-body were attacked, demonstrating that cleaner fish with MSR capacity recognize their own facial characteristics in photographs. Additionally, when presented with self-photographs with a mark placed on the throat, unmarked mirror-experienced cleaner fish demonstrated throat-scraping behaviors. When combined, our results provide clear evidence that cleaner fish recognize themselves in photographs and that the likely mechanism for MSR is associated with a mental image of the self-face, not a kinesthetic visual-matching model. Humans are also capable of having a mental image of the self-face, which is considered an example of private self-awareness. We demonstrate that combining mirror test experiments with photographs has enormous potential to further our understanding of the evolution of cognitive processes and private self-awareness across nonhuman animals.

Monday, February 20, 2023

Fundamentally rethinking what a mind is and how a brain works.

The February Issue of Trends in Cognitive Science has an open source Opinions article from Lisa Feldman Barrett and collaborators that suggests that new research approaches grounded in different ontological commitments will be required to properly describe brain-behavior relationships. Here is a clip of the introductory text and a graphic clip from the article. Finally, I pass on the concluding remarks on fundamentally rethinking what a mind is and how a brain works.
Most brain imaging studies present stimuli and measure behavioral responses in temporal units (trials) that are ordered randomly. Participants’ brain signals are typically aggregated to model structured variation that allows inferences about the broader population from which people were sampled. These methodological details, when used to study any phenomenon of interest, often give rise to brain-behavior findings that vary unexpectedly (across stimuli, context, and people). Such findings are typically interpreted as replication failures, with the observed variation discounted as error caused by less than rigorous experimentation (Box 1). Methodological rigor is of course important, but replication problems may stem, in part, from a more pernicious source: faulty assumptions (i.e., ontological commitments) that mis-specify the psychological phenomena of interest.

In this paper, we review three questionable assumptions whose reconsideration may offer opportunities for a more robust and replicable science: 

 (1) The localization assumption: the instances that constitute a category of psychological events (e.g., instances of fear) are assumed to be caused by a single, dedicated psychological process implemented in a dedicated neural ensemble (see Glossary). 

 (2) The one-to-one assumption: the dedicated neural ensemble is assumed to map uniquely to that psychological category, such that the mapping generalizes across contexts, people, measurement strategies, and experimental designs. 

 (3) The independence assumption: the dedicated neural ensemble is thought to function independently of contextual factors, such as the rest of the brain, the body, and the surrounding world, so the ensemble can be studied alone without concern for those other factors. Contextual factors might moderate activity in the neural ensemble but should not fundamentally change its mapping to the instances of a psychological category. 

 These three assumptions are rooted in a typological view of the mind, brain, and behavior [1. ] that was modeled on 19th century physics and continues to guide experimental practices in much of brain-behavior research to the present day. In this paper, we have curated examples from studies of human functional magnetic resonance imaging (fMRI) and neuroscience research using non-human animals that call each assumption into question. We then sketch the beginnings of an alternative approach to study brain-behavior relationships, grounded in different ontological commitments: (i) a mental event comprises distributed activity across the whole brain; (ii) brain and behavior are linked by degenerate (i.e., many-to-one) mappings; and (iii) mental events emerge as a complex ensemble of weak, nonlinearly interacting signals from the brain, body, and external world.

 

Concluding remarks

Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods. Our suggestions require more than merely improved methodological rigor for traditional experimental design. Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology, as well as cybernetics and systems theory. At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.

Friday, February 17, 2023

The touch sensitive nerve cells that make mice (and probably us) horny.

From Elias et al. (open source) in the journal Cell:  

Highlights

• Activation of Mrgprb4-lineage touch neurons induces lordosis-like posture
• Activation of Mrgprb4-lineage touch neurons is rewarding
• Mrgprb4-lineage touch neurons are required for female sexual receptivity
• Mrgprb4-lineage touch neurons engage dopaminergic neurons during social behavior
Summary
Pleasurable touch is paramount during social behavior, including sexual encounters. However, the identity and precise role of sensory neurons that transduce sexual touch remain unknown. A population of sensory neurons labeled by developmental expression of the G protein-coupled receptor Mrgprb4 detects mechanical stimulation in mice. Here, we study the social relevance of Mrgprb4-lineage neurons and reveal that these neurons are required for sexual receptivity and sufficient to induce dopamine release in the brain. Even in social isolation, optogenetic stimulation of Mrgprb4-lineage neurons through the back skin is sufficient to induce a conditioned place preference and a striking dorsiflexion resembling the lordotic copulatory posture. In the absence of Mrgprb4-lineage neurons, female mice no longer find male mounts rewarding: sexual receptivity is supplanted by aggression and a coincident decline in dopamine release in the nucleus accumbens. Together, these findings establish that Mrgprb4-lineage neurons initiate a skin-to-brain circuit encoding the rewarding quality of social touch.

Wednesday, February 15, 2023

A.I. as a path towards mass lunacy

MindBlog wants to be part of the passing on of this elegantly stated clip from an article by novelist, literary critic, and essayist Walter Kirn:
What chatbots do is scrape the web, the library of texts already written, and learn from it how to add to the collection, which causes them to start scraping their own work in ever enlarging quantities, along with the texts produced by future humans. Both sets of documents will then degenerate. For as the adoption of A.I. relieves people of their verbal and mental powers and pushes them toward an echoing conformity, much as the mass adoption of map apps have abolished their senses of direction, the human writings from which the A.I. draws will decline in originality and quality along with their derivatives. Enmeshed, dependent, mutually enslaved, machine and man will unite their special weaknesses — lack of feeling and lack of sense — and spawn a thing of perfect lunacy, like the child of a psychopath and an idiot.
I can hear the objections to this dire scenario of a million gung-ho programmers as well as the ambitious A.I. itself, but I, a creative writer, am wed to it. I think dramatically first and scientifically second, such is my art. My ancient and possibly endangered art is imagining worst cases and playing them out to their bitter, tragic ends, as Sophocles did when he posited a king who unwittingly killed his father, married his mother, and then launched an inquiry into the matter after vowing to slay the perpetrator. See? See what writers were capable of then?
Now we have ‘Ant-Man.’ And worse, ‘Ant-Man’ sequels, enhanced by C.G.I.

Monday, February 13, 2023

An fMRI marker of drug and food craving

Koban et al. identify an fMRI-based neural signature of craving that is common to both food and drugs, predicts self-reported craving, distinguishes drug users from non-users, and tracks the efficacy of a cognitive therapy technique to reduce craving:
Craving is a core feature of substance use disorders. It is a strong predictor of substance use and relapse and is linked to overeating, gambling, and other maladaptive behaviors. Craving is measured via self-report, which is limited by introspective access and sociocultural contexts. Neurobiological markers of craving are both needed and lacking, and it remains unclear whether craving for drugs and food involve similar mechanisms. Across three functional magnetic resonance imaging studies (n = 99), we used machine learning to identify a cross-validated neuromarker that predicts self-reported intensity of cue-induced drug and food craving (P < 0.0002). This pattern, which we term the Neurobiological Craving Signature (NCS), includes ventromedial prefrontal and cingulate cortices, ventral striatum, temporal/parietal association areas, mediodorsal thalamus and cerebellum. Importantly, NCS responses to drug versus food cues discriminate drug users versus non-users with 82% accuracy. The NCS is also modulated by a self-regulation strategy. Transfer between separate neuromarkers for drug and food craving suggests shared neurobiological mechanisms. Future studies can assess the discriminant and convergent validity of the NCS and test whether it responds to clinical interventions and predicts long-term clinical outcomes.

Friday, February 10, 2023

Multigenerational Impacts of Childhood Access to the Safety Net.

In an open source article in the January issue of the American Economic Review East et al. show that early life exposure to Medicaid enhances the next generation's health.
We examine multigenerational impacts of positive in utero health interventions using a new research design that exploits sharp increases in prenatal Medicaid eligibility that occurred in some states. Our analyses are based on US Vital Statistics natality files, which enables linkages between individuals' early life Medicaid exposure and the next generation's health at birth. We find evidence that the health benefits associated with treated generations' early life program exposure extend to later offspring. Our results suggest that the returns on early life health investments may be substantively underestimated.

Wednesday, February 08, 2023

Shifting from foraging to farming, beginning ~12,000 years ago, changed everything.

A special Feature section in the Jan. 17 issue of PNAS offers a series of perspectives on the past 12,000 years of human behavior, adaptation, and evolution that shaped who we are today. An introduction to the special section by Larsen gives an overview and brief summary of each of the perspectives presented. I pass on the last paragraph (Conclusions) of that summary:
In evolutionary terms, the transition at the Pleistocene–Holocene boundary was extraordinary, especially in consideration of the beginning of a fundamental shift in dietary focus and the downstream effects of diets based on domesticated plants and animals. The transition provided the context for a remarkable increase in population. However, the costs for that success—elevated levels of infectious diseases, undernutrition, and conflict—are still with us today. Our species will continue to adapt, to develop strategies for success, and to mitigate challenges. That is what we do. Once we began the shift to and intensification of farming, the remarkable changes seen in humans became critically important developments in recent human evolution. In view of conditions today, including climate change, overpopulation, and the rise in prevalence of infectious diseases, both old and newly emerging, it should come as no surprise that dependence on a few staple crops and the shift to sedentary behavior will be with us for the foreseeable future. They are, after all, a legacy of our past, forming and shaping the dietary framework, behavioral patterns, and outcomes in health and well-being for all eight billion of us who occupy the world today.

Monday, February 06, 2023

Openness to spiritual experiences...with caution

After a Protestant Christian upbringing (I was a teenage organist in an Austin Lutheran Church, and took a course from theologian Paul Tillich at Harvard), my adult materialistic scientific Deric has never been able to fathom how an intellectual like Ross Douthat could be a devout Catholic. My irrational faith is in a materialism that is open to spiritual experiences and insights, but also strives to explain them in materialistic terms (as I think near-death experiences have been). I find Douthat’s recent opinion piece in the NYTimes very lucid, although I take exception to one of his pronouncements, and I recommend that you read it. Here are some clips:
...the dissolution of the old order of American religion — the decline of churches and denominations and the rise of deinstitutionalized spirituality — means that more and more religious lives are lived in between worldviews, in experimental territory where it’s a mistake to expect coherence, theological consistency, a definite set of prior assumptions or beliefs...I want to defend the rationality of this kind of spiritual experimentation and then to warn about its dangers.
Douthat then offers three examples of this experimental style: magical thinking, experimenting with psychedelics, and pantheistic art that blurs spiritual traditions. And he continues:
For the stringent materialist, everything I’ve just described is reasonable as long as it's understood to be playacting, experience hunting, artistic experimentation. Only when it becomes serious does it offend against rationality.
However, stringent materialism is itself a weird late-modern superstition, and the kind of experimentation I’m describing is actually far more rational than a life lived as though the universe is random and indifferent and human beings are gene-transmission machines with an illusion of self-consciousness.
So... put me in the camp of irrational believers in stringent materialism. And... by what authority does Mr. Douthat get to declare that spiritual experimentation or superstition is "far more rational than life lived as though the universe is random, etc."? Superstition is superstition; irrational is irrational. What criteria is Mr. Douthat using for his "far more rational" judgment? Are they utilitarian?... as in "X diminishes or enhances the well-being of humans more than Y"? He should explicitly state them.

Friday, February 03, 2023

Cryptocurrency isn't going away - any more than gold is...

...as long as cryptocurrency maintains a core of true believers in the blockchain concept for guaranteeing trustworthiness. All currencies are useful as tokens for exchange only to the extent that humans believe in them. The point of this brief post is just to pass on links to two recent screeds by Desai and Krugman that make entertaining reading:

Mihir A. Desai - The Crypto Collapse and the End of the Magical Thinking That Infected Capitalism  

Paul Krugman - Wonking Out: Give Me That Gold Time Religion 


Wednesday, February 01, 2023

Market exposure and human morality

In the face of resurgent threats to place religion over the rest of civil society, and specious claims that we can't have a moral society without the (varying) moral injunctions of various religions, this recent article by Enke in Nature Human Behavior offers a nice explication of one of several other routes by which moral behaviors have evolved over time. Here is the abstract:
According to evolutionary theories, markets may foster an internalized and universalist prosociality because it supports market-based cooperation. This paper uses the cultural folklore of 943 pre-industrial ethnolinguistic groups to show that a society’s degree of market interactions, proxied by the presence of intercommunity trade and money, is associated with the cultural salience of (1) prosocial behaviour, (2) interpersonal trust, (3) universalist moral values and (4) moral emotions of guilt, shame and anger. To provide tentative evidence that a part of this correlation reflects a causal effect of market interactions, the analysis leverages both fine-grained geographic variation across neighbouring historical societies and plausibly exogenous variation in the presence of markets that arises through proximity to historical trade routes or the local degree of ecological diversity. The results suggest that the coevolutionary process involving markets and morality partly consists of economic markets shaping a moral system of a universalist and internalized prosociality.

Monday, January 30, 2023

How sleep shapes what we remember and forget.

I have found monitoring the quality of my sleep to be a fascinating and useful activity. I use both the Oura ring and Apple watch to monitor body temperature, body movement, heart rate, and heart rate variability, and then compare their different (but broadly similar) algorithmic estimates of deep sleep, REM sleep, non-REM sleep, and wake periods. I'm on the lookout for articles on sleep during my scans of journals' tables of contents, and have found this review by Sakai of what happens during our sleep to be concise and useful. Below is a more general overview from edited and rearranged clips (the article goes into more electrophysiological and cellular details):

 
(image credit Dave Cutler)

...memory at the beginning of the consolidation process is very much anchored in hippocampal networks, and in the end of this process, it primarily resides in neocortical networks...New memories are rich with contextual clues such as the time, place, and sensory details of an experience...as memories get encoded in the cortex, many of those spatial and temporal details fade...forgetting—through weakening or loss of synapses—seems to play a key role in the process of memory consolidation, especially during sleep... What remains are the elements representing the essential core of the memory. When learning how to drive, for example, the movements needed to steer and brake are critical; the details of avoiding a specific car on a particular outing are not...sleep’s role in memory is not simply about passive storage. Rather...a more active process of consolidation that extracts key information and forms a generalized version of the overall memory that can later be accessed and applied to relevant situations.
Sleep in mammals has distinct phases as characterized by specific EEG patterns. These include brain-wide slow oscillations (less than 1 Hz in frequency), sharp-wave ripples (100-300 Hz) in the hippocampus, and spindles (10-15 Hz), which are related to the firing of neurons in the circuits connecting the thalamus and the cortex. Upon onset of sleep, the brain enters a non-rapid eye movement (non-REM) phase. During non-REM sleep, slow oscillations sweep across large regions of the brain, punctuated by swells of spindles and bursts of sharp wave-ripples. A period of rapid eye movement (REM) sleep follows, with characteristic bursts of its namesake eye movements and low-amplitude theta oscillations around 4-8 Hz. The brain cycles through these phases throughout the sleep period, with slow-wave, non-REM sleep dominating the early hours and REM sleep the late hours.
There are...
...distinct roles of different stages of sleep in memory formation. Non-REM sleep has been shown to be very important for consolidation of declarative memories—those based on recall of information—while REM sleep seems to play a larger part in procedural or task-based memories...this may relate to the degree of synaptic change required. For declarative memories, most of the foundational learning has already taken place; remembering a new fact likely requires only small changes in synaptic strengths. By contrast, procedural memories require a massive amount of synaptic change...If you want to learn how to ride a bike, or how to play capoeira … it's not like learning a new name...It’s weeks, months, years of work. And so it seems like REM sleep is really, really necessary to do this long-term persistent synaptic change.”


Friday, January 27, 2023

MindBlog's 2010 recital and lecture in Istanbul

I've been meandering back over old MindBlog posts and found an orphaned .mov file that I've reposted to my YouTube channel, quite a bit of nostalgia. The guy turning my pages was my handler and guide during my 10 day visit to Istanbul. He is now a Turkish Airlines pilot. 

 

Wednesday, January 25, 2023

The Evolution of Peace

I pass on the abstract of an article by Luke Glowacki that has been submitted to the network of Behavioral and Brain Science reviewers who might offer commentary on its arguments. Motivated readers can obtain a copy of the article from me.
Abstract: While some species have affiliative and even cooperative interactions between individuals of different social groups, humans are alone in having durable, positive-sum, interdependent relationships across unrelated social groups. Our capacity to have harmonious relationships that cross group boundaries is an important aspect of our species' success, allowing for the exchange of ideas, materials, and ultimately enabling cumulative cultural evolution. Knowledge about the conditions required for peaceful intergroup relationships is critical for understanding the success of our species and building a more peaceful world. How do humans create harmonious relationships across group boundaries and when did this capacity emerge in the human lineage? Answering these questions involves considering the costs and benefits of intergroup cooperation and aggression, for oneself, one's group, and one's neighbor. Taking a game theoretical perspective provides new insights into the difficulties of removing the threat of war and reveals an ironic logic to peace—the factors that enable peace also facilitate the increased scale and destructiveness of conflict. In what follows, I explore the conditions required for peace, why they are so difficult to achieve, and when we expect peace to have emerged in the human lineage. I argue that intergroup cooperation was an important component of human relationships and a selective force in our species history in the past 300 thousand years. But the preconditions for peace only emerged in the past 100 thousand years and likely coexisted with intermittent intergroup violence which would have also been an important and selective force in our species' history.

Monday, January 23, 2023

Our different styles of thinking.

An interesting recent article by Joshua Rothman, the ideas editor of newyorker.com, notes several recent books that describe different styles of thinking. A few clips:
In “Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions,” Temple Grandin identifies a continuum of thought styles that’s roughly divisible into three sections. On one end are verbal thinkers, who often solve problems by talking about them in their heads or, more generally, by proceeding in the linear, representational fashion typical of language. (Estimating the cost of a building project, a verbal thinker might price out all the components, then sum them using a spreadsheet—an ordered, symbol-based approach.) On the other end of the continuum are “object visualizers”: they come to conclusions through the use of concrete, photograph-like mental images, as Grandin does when she compares building plans in her mind. In between those poles, Grandin writes, is a second group of visual thinkers—“spatial visualizers,” who seem to combine language and image, thinking in terms of visual patterns and abstractions.
Grandin proposes imagining a church steeple. Verbal people, she finds, often make a hash of this task, conjuring something like “two vague lines in an inverted V,” almost as though they’ve never seen a steeple before. Object visualizers, by contrast, describe specific steeples that they’ve observed on actual churches: they “might as well be staring at a photograph or photorealistic drawing” in their minds. Meanwhile, the spatial visualizers picture a kind of perfect but abstract steeple—“a generic New England-style steeple, an image they piece together from churches they’ve seen.” They have noticed patterns among church steeples, and they imagine the pattern, rather than any particular instance of it.
The imagistic minds in “Visual Thinking” can seem glamorous compared with the verbal ones depicted in “Chatter: The Voice in Our Head, Why It Matters, and How to Harness It,” by Ethan Kross. Kross is interested in what’s known as the phonological loop—a neural system, consisting of an “inner ear” and an “inner voice,” that serves as a “clearinghouse for everything related to words that occurs around us in the present.” If Grandin’s visual thinkers are attending Cirque du Soleil, then Kross’s verbal thinkers are stuck at an Off Broadway one-man show. It’s just one long monologue.
People with inner monologues, Kross reports, often spend “a considerable amount of time thinking about themselves, their minds gravitating toward their own experiences, emotions, desires, and needs.” This self-centeredness can spill over into our out-loud conversation. In the nineteen-eighties, the psychologist Bernard Rimé investigated what we’d now call venting—the compulsive sharing of negative thoughts with other people. Rimé found that bad experiences can inspire not only interior rumination but the urge to broadcast it. The more we share our unhappiness with others, the more we alienate them… Maybe it can pay to keep your thoughts to yourself.
Kross’s bottom line is that our inner voices are powerful tools that must be tamed. He ends his book with several dozen techniques for controlling our chatter. He advises trying “distanced self-talk”: by using “your name and the second-person ‘you’ to refer to yourself,” he writes, you can gain more command over your thinking. You might use your inner voice to pretend that you’re advising a friend about his problems; you might redirect your thoughts toward how universal your experiences are (It’s normal to feel this way), or contemplate how every new experience is a challenge you can overcome (I have to learn to trust my partner). The idea is to manage the voice that you use for self-management. Take advantage of the suppleness of dialogue. Don’t just rehearse the same old scripts; send some notes to the writers’ room.
If we can’t say exactly how we think, then how well do we know ourselves? In an essay titled “The Self as a Center of Narrative Gravity,” the philosopher Daniel Dennett argued that a layer of fiction is woven into what it is to be human. In a sense, fiction is flawed: it’s not true. But, when we open a novel, we don’t hurl it to the ground in disgust, declaring that it’s all made-up nonsense; we understand that being made up is actually the point. Fiction, Dennett writes, has a deliberately “indeterminate” status: it’s true, but only on its own terms. The same goes for our minds. We have all sorts of inner experiences, and we live through and describe them in different ways—telling one another about our dreams, recalling our thoughts, and so on. Are our descriptions and experiences true or fictionalized? Does it matter? It’s all part of the story.

Friday, January 20, 2023

A quick MindBlog riff on what a self is....

Spilling out what I was thinking at breakfast this morning and deciding to fire it off, though it is probably incomprehensible to most readers; perhaps I will attempt to clarify it later (or delete it!):

My self model or 'I' stands amongst my models of others, and during my early postnatal period I probably formed those other models prior to my own, a yet unknown self discovering and learning to predict the behavior of others to gain feeding and care, and only then composing my own self from parts of them. This is consonant with Graziano's take on consciousness as a perceptual construct ("Consciousness and the Social Brain"), also with Wegner's self as being the emotion of authorship ("The Illusion of Conscious Will") and with Metzinger's emotions as evolved virtual organs analogous to the hardware of the liver or kidney ("The Ego Tunnel"). Perhaps the closest we come to a 'real fundamental self' is the experience of occupancy of an expanded space (of non-dual consciousness) that feels to be the container for all of this.

Wednesday, January 18, 2023

Mindful attention enhances brain network control and uncouples past from the present

Zhou et al. (open source) do an interesting experiment on mindfulness and brain network control:  

Significance

Practicing mindfulness helps individuals regulate attention, thoughts, feelings, and behavior. In recognizing these benefits, various schools, workplaces, and clinics are increasingly teaching mindfulness. How does mindful attention change brain function to support self-regulation? Addressing this question could inform how we teach mindfulness and whom we expect to benefit. We modeled the defining components of mindful experience using tools that probe the structure and function of the brain’s network. In a randomized controlled study of alcohol consumption, we found that a brain network’s dynamic shape predicts individuals’ future alcohol consumption and explains otherwise elusive components of mindful experience, such as being present. Our results provide new understanding of how mindful attention affects brain function.
Abstract
Mindful attention is characterized by acknowledging the present experience as a transient mental event. Early stages of mindfulness practice may require greater neural effort for later efficiency. Early effort may self-regulate behavior and focalize the present, but this understanding lacks a computational explanation. Here we used network control theory as a model of how external control inputs—operationalizing effort—distribute changes in neural activity evoked during mindful attention across the white matter network. We hypothesized that individuals with greater network controllability, thereby efficiently distributing control inputs, effectively self-regulate behavior. We further hypothesized that brain regions that utilize greater control input exhibit shorter intrinsic timescales of neural activity. Shorter timescales characterize quickly discontinuing past processing to focalize the present. We tested these hypotheses in a randomized controlled study that primed participants to either mindfully respond or naturally react to alcohol cues during fMRI and administered text reminders and measurements of alcohol consumption during 4 wk postscan. We found that participants with greater network controllability moderated alcohol consumption. Mindful regulation of alcohol cues, compared to one’s own natural reactions, reduced craving, but craving did not differ from the baseline group. Mindful regulation of alcohol cues, compared to the natural reactions of the baseline group, involved more-effortful control of neural dynamics across cognitive control and attention subnetworks. This effort persisted in the natural reactions of the mindful group compared to the baseline group. More-effortful neural states had shorter timescales than less effortful states, offering an explanation for how mindful attention promotes being present.
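The "network controllability" at the heart of this abstract comes from linear systems theory: a brain region's average controllability is commonly operationalized as the trace of the controllability Gramian when control input enters only at that region. Below is a rough, self-contained sketch of that computation on a toy four-node network. This is not the authors' actual diffusion-imaging pipeline; the matrix, stabilizing normalization, and horizon here are illustrative assumptions.

```python
import numpy as np

def average_controllability(A, node, horizon=100):
    """Trace of the finite-horizon controllability Gramian for the
    linear system x_{t+1} = A x_t + B u_t, with input only at `node`."""
    n = A.shape[0]
    # Normalize so the largest singular value is < 1 (a common
    # stabilization step in the network-control literature)
    A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
    B = np.zeros((n, 1))
    B[node, 0] = 1.0          # control input enters at one region only
    W = np.zeros((n, n))
    At = np.eye(n)
    for _ in range(horizon):  # W = sum_t A^t B B^T (A^T)^t
        W += At @ B @ B.T @ At.T
        At = At @ A
    return np.trace(W)

# Toy symmetric "structural network" of 4 regions on a path: 0-1-2-3
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
scores = [average_controllability(A, i) for i in range(4)]
```

In this framework, regions with high average controllability can push the system into many easily reachable states with little input energy (here the better-connected middle nodes score higher than the endpoints); the paper asks whether individuals whose networks afford such efficient distribution of control input are better at mindful self-regulation.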

Monday, January 16, 2023

COVID-19 and brain aging

Over the Christmas and New Year's holidays I was hit first by a mild Covid infection that lasted only a few days (I've had all 5 vaccinations and immediately took Paxlovid on testing positive) and then five days later had a slightly longer Paxlovid-rebound infection. A transient brain fog seems to have cleared by now. This personal experience makes me especially attentive to articles like Welberg's note on Covid-19 and brain aging, which suggests that brain changes associated with Covid infection are most likely due to neuroinflammation resulting from the infection, not from the virus itself. Here is the abstract:
Severe COVID-19 has been associated with cognitive impairment and changes in the frontal cortex. In a study published in Nature Aging, Mavrikaki, Lee et al. performed RNA sequencing on frontal cortex samples from 21 individuals with severe COVID-19, 22 age- and sex-matched uninfected controls, and 9 uninfected people who had received intensive care or ventilator treatment. The authors found almost 7,000 differentially expressed genes (DEGs) in the patient samples compared to controls. Upregulated DEGs were enriched for genes involved in immune-related pathways, and downregulated DEGs were enriched for genes involved in synaptic activity, cognition and memory — a profile of transcriptional changes that resembles those previously observed in aging brains. Direct comparisons between frontal cortex samples from young and old individuals confirmed this overlap. Application of tumor necrosis factor, interferon-β or interferon-γ to cultured human primary neurons induced transcriptional changes similar to those seen in patients with severe COVID-19. As no SARS-CoV-2 RNA was detected in the patient samples, these data suggest that the transcriptomic changes in frontal cortex of patients with severe COVID-19 were due to neuroinflammatory processes rather than a direct effect of the virus.

Friday, January 13, 2023

Materialism meets transcendence

I want to point MindBlog readers to "Searching," a PBS series by Alan Lightman that I plan to start watching as soon as I can, based on the following description and review in Science Magazine:
In part 1, “The Stars & The Osprey,” Lightman undergoes functional magnetic resonance imaging and interviews neuroscientist Robert Desimone about how much neuroimaging can tell us about Lightman’s transcendental experience. He ultimately finds this approach unsatisfying and introduces viewers to the debate between mechanists, who believe that the laws of physics, chemistry, and biology are sufficient to explain life, and vitalists, who believe that living creatures are imbued with an additional spiritual quality not explainable by science.
Here, he interviews biologist and Nobel laureate Jack Szostak and the Dalai Lama as proponents of these two camps, respectively. Although the Dalai Lama expresses enthusiasm for scientific investigation, most scientists will likely resonate with Szostak’s declaration that “It’s not just atoms and molecules, it’s the organization…it’s no less wonderful or beautiful because we understand that there is a natural origin for [life].”
Part 2, “The Big & The Small,” begins with the familiar “powers of 10” view of the Universe, moving from the subatomic to the galactic. Lightman then queries what such explorations have to do with consciousness, conversing with BINA48, an extraordinary humanoid robot programmed by artificial intelligence with >100 hours of a real woman’s memories. Here, he speculates that BINAs of the future may achieve consciousness. He then probes this issue over Zoom with the Dalai Lama, whom viewers observe watching a movie of BINA48 conversing with the woman from whom BINA48 was programmed; this multilayered interaction is simultaneously disconcerting, comical, and wondrous. After additional interviews with a bioethicist, a rabbi, and others, Lightman ultimately concludes that we may be just atoms and molecules, but, in the words of Emily Dickinson, “The brain is wider than the sky.”
In part 3, “Homo Techno,” Lightman contemplates our inner world of perception, consciousness, and self-awareness. He meets Erik Sorto, who lost all movement from the neck down after being shot in the back. With electrodes implanted into his posterior parietal cortex and 2 years of training, Sorto learned to control the movement of a robotic arm by thought. This is an extraordinary achievement, and Lightman posits that it is an example of the beginning of our transition from Homo sapiens into Homo techno, part-human, part-machine entities that reflect the modification of human evolution by technological means. From an actual evolutionary biology standpoint, this is nonsensical, and it is unclear that Lightman even means to propose such an idea, but this section’s loose language will rankle some viewers.
At another point in the series’ final episode, Lightman finds himself dizzy from talk of neurons and galaxies and takes refuge in closely examining a single square inch of earth. His biophilia is obviously meaningful to him, and it would have been stimulating had the episode included interviews with an evolutionary biologist or naturalist, who might have helped to better articulate this facet of the human experience. The series closes, appropriately, with philosophical musings about the need for each of us to find meaning for ourselves.
A small problem at the outset is the inherent impossibility of conveying transcendence through description—the degree to which viewers relate to Lightman’s moment of enlightenment will depend on their own experiences and inclinations. Additionally, Lightman’s screen persona leaves something to be desired. There are, however, few people better qualified to explore these issues, and as the series progresses, his humanity shines through, bringing a welcome lightness to some potentially ponderous material.
Despite its focus on phenomena currently unexplainable by science, Searching is full of the joy and passion that can be found in the doing of science and succeeds in conveying how deeply meaningful science is to its practitioners. It is well worth your time and is especially recommended to families with kids curious about life and our world.

Wednesday, January 11, 2023

Awe as a Pathway to Mental and Physical Health

Reading this open source review from Maria Monroy and Dacher Keltner leaves me feeling substantially more mellow! Their abstract, followed by a quote from Emerson, and then a summary graphic...
How do experiences in nature or in spiritual contemplation or in being moved by music or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes—shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning—that benefit well-being. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
and,
In the woods, we return to reason and faith. There I feel that nothing can befall me in life—no disgrace, no calamity (leaving me my eyes), which nature cannot repair. Standing on the bare ground—my head bathed by the blithe air and uplifted into infinite space—all mean egotism vanishes. I become a transparent eyeball; I am nothing; I see all; the currents of the Universal Being circulate through me; I am part or parcel of God. The name of the nearest friend sounds then foreign and accidental; to be brothers, to be acquaintances, master or servant, is then a trifle and a disturbance. I am the lover of uncontained and immortal beauty.
-from Emerson R. W. (1836). Nature. Reprinted in Ralph Waldo Emerson, Nature and Other Essays (2009). Dover.
Fig. 1. Model for awe as a pathway to mental and physical health. This model shows that awe experiences will lead to the mediators that will lead to better mental and physical-health outcomes. Note that the relationships between awe experiences and mediators, and mediators and outcomes have been empirically identified; the entire pathways have only recently begun to be tested. One-headed arrows suggest directional relationships, and two-headed arrows suggest bidirectionality. DMN = default-mode network; PTSD = posttraumatic stress disorder.

Monday, January 09, 2023

AI After Death: interactions with AI representations of the deceased

 I want to pass on to MindBlog readers the following excellent notes that Terry Allard made to guide a discussion at the Nov. 29, 2022 session of the Chaos & Complex Systems Discussion group at the Univ. of Wisconsin. 

Chaos & Complex Systems Discussion

AI After Death: interactions with AI representations of the deceased
November 29, 2022
Source material: "AI's New Frontier," by Caren Chesler, Washington Post, Nov. 12, 2022
https://www.washingtonpost.com/health/2022/11/12/artificial-intelligence-grief/ See also https://www.media.mit.edu/projects/augmented-eternity/overview/

AI companies have begun mining digital content and real-world interviews to create AI representations of people with whom their survivors can interact.

The digital representations are created from social media posts, email, electronic surveillance, voice recordings and sometimes actual interviews with the targets before they pass away.

The interaction can be made directly with visual, audio or text avatars.

  • The documentary, “Meeting You,” created a digitized re-creation of a recently lost child that the mother could see through a virtual reality headset.

  • Augmented Eternities (MIT Media Lab): This project uses a distributed machine intelligence network to enable its users to control their growing digital footprint, turn it into their digital representation, and share it as a part of a social network.

    Our digital identity has become so rich and intrinsic that without it, it may feel like a part of us is missing. The number of sensors we carry daily and the digital footprints we leave behind have given us enough granular patterns and data clusters that we can now use them for prediction and reasoning on behalf of an individual. We believe that by enabling our digital identity to perpetuate, we can significantly contribute to global expertise and enable a new form of an intergenerational collective intelligence.

    https://www.media.mit.edu/projects/augmented-eternity/overview/

  • Amazon unveiled a new feature it’s developing for Alexa, in which the virtual assistant can read aloud stories in a deceased loved one’s voice.

  • Several entrepreneurs in the AI sphere, including James Vlahos of HereAfter AI and Eugenia Kuyda, who co-founded AI start-ups Luka and Replika, have turned their efforts toward virtual representations of people, using data from their digital footprint to craft an avatar or chatbot that can interact with family members after they’ve passed.

    HereAfter’s app takes users through an interview process before they’ve died, prompting them to recollect stories and memories that are then recorded. After they’ve passed, family members can ask questions, and the app responds in the deceased’s voice using the accumulated interview information, almost like it’s engaging in a conversation.

    Some Questions for Discussion:

  1. How does posthumous interaction benefit the survivors? Are there risks? Could it lead to someone wanting to remain in this virtual world of their loved one?

  2. Could posthumous digital avatars have a therapeutic benefit for the grieving?

  3. Can digital avatars replace human interaction writ large?

  4. Can digital avatars learn and evolve on their own?

  5. Are digital avatars alive or could they be? How do we define sentience?

  6. Will “deep fakes” compromise trust in online person-to-person interactions?

  7. Can people download their identities into digital form and transcend (cheat) death?

 

Friday, January 06, 2023

Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty

Wow: in an article with the title of this post, Breznau et al. show that different research teams, independently presented with the same data and the same social science hypothesis, reach opposing conclusions about what the data show.

Significance

Will different researchers converge on similar findings when analyzing the same data? Seventy-three independent research teams used identical cross-country survey data to test a prominent social science hypothesis: that more immigration will reduce public support for government provision of social policies. Instead of convergence, teams’ results varied greatly, ranging from large negative to large positive effects of immigration on social policy support. The choices made by the research teams in designing their statistical tests explain very little of this variation; a hidden universe of uncertainty remains. Considering this variation, scientists, especially those working with the complexities of human societies and behavior, should exercise humility and strive to better account for the uncertainty in their work.
Abstract
This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
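One concrete reason equally defensible analyses can disagree is specification choice: whether or not to adjust for a given covariate can flip the sign of an estimated effect. A minimal illustration with toy data (invented here for the sake of the example, not drawn from the Breznau et al. study):

```python
import numpy as np

# Toy data: the outcome depends negatively on x within each group,
# but group membership (z) confounds the pooled relationship.
x = np.array([0., 1., 2., 4., 5., 6.])
z = np.array([0., 0., 0., 1., 1., 1.])
y = -x + 10 * z  # exact, noise-free for clarity

# Team A: simple regression of y on x (ignores z)
X_a = np.column_stack([np.ones_like(x), x])
slope_a = np.linalg.lstsq(X_a, y, rcond=None)[0][1]   # positive

# Team B: same data, same hypothesis, but adjusts for z
X_b = np.column_stack([np.ones_like(x), x, z])
slope_b = np.linalg.lstsq(X_b, y, rcond=None)[0][1]   # negative
```

Two defensible workflows, one dataset, opposite signs. The more striking finding of the paper, though, is that even after coding all such identifiable choices across the 73 teams, more than 95% of the variance in results remained unexplained.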

Wednesday, January 04, 2023

A deep-learning model of prescient ideas demonstrates that they emerge from the periphery

 The abstract of a fascinating open source article from Vicinanza et al.:

Where do prescient ideas—those that initially challenge conventional assumptions but later achieve widespread acceptance—come from? Although their outcomes in the form of technical innovation are readily observed, the underlying ideas that eventually change the world are often obscured. Here we develop a novel method that uses deep learning to unearth the markers of prescient ideas from the language used by individuals and groups. Our language-based measure identifies prescient actors and documents that prevailing methods would fail to detect. Applying our model to corpora spanning the disparate worlds of politics, law, and business, we demonstrate that it reliably detects prescient ideas in each domain. Moreover, counter to many prevailing intuitions, prescient ideas emanate from each domain's periphery rather than its core. These findings suggest that the propensity to generate far-sighted ideas may be as much a property of contexts as of individuals.
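The intuition behind such a language-based measure can be sketched without any deep learning: score a document by how much better a model trained on later language predicts it than a model trained on its own era's language. The toy bigram version below is my own illustrative construction (the corpora, smoothing, and scoring rule are assumptions; the paper itself uses fine-tuned neural language models), but it flags the document whose phrasing anticipates the future corpus:

```python
import math
from collections import Counter

def train_bigram(docs):
    """Count bigrams and their left-context unigrams over whitespace tokens."""
    pairs, contexts = Counter(), Counter()
    for doc in docs:
        toks = doc.lower().split()
        for a, b in zip(toks, toks[1:]):
            pairs[(a, b)] += 1
            contexts[a] += 1
    return pairs, contexts

def avg_logprob(doc, model, vocab_size, alpha=1.0):
    """Mean add-alpha-smoothed bigram log-probability of a document."""
    pairs, contexts = model
    toks = doc.lower().split()
    lp = [math.log((pairs[(a, b)] + alpha) / (contexts[a] + alpha * vocab_size))
          for a, b in zip(toks, toks[1:])]
    return sum(lp) / len(lp)

past = ["the old way works fine",
        "markets follow the old way",
        "we trust the old way",
        "networks will change everything"]   # the 'prescient' outlier
future = ["networks change everything now",
          "digital networks change everything"]

vocab = {t for d in past + future for t in d.lower().split()}
m_past, m_future = train_bigram(past), train_bigram(future)

# Prescience score: the future model predicts the document better
# (relative to the past model) than it predicts its contemporaries.
scores = {d: avg_logprob(d, m_future, len(vocab)) - avg_logprob(d, m_past, len(vocab))
          for d in past}
most_prescient = max(scores, key=scores.get)
```

Consistent with the paper's framing, the flagged document is the one poorly aligned with its era's dominant language, i.e. the one sitting at the "periphery" of its own corpus.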