Friday, March 29, 2024

How communication technology has enabled the corruption of our communication and culture.

I pass on two striking examples from today’s New York Times, with a few clips of text from each:

A.I.-Generated Garbage Is Polluting Our Culture:

(You really should read the whole article... I've given up on trying to assemble clips of text that convey the whole message, and instead pass on these bits from near the end of the article:)

....we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap A.I. content to maximize clicks and views, which in turn pollutes our culture and even weakens our grasp on reality. And so far, major A.I. companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture.
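To make the idea of a statistical watermark concrete, here is a toy sketch of my own (not any company's actual scheme, and all names here are made up): partition the vocabulary into two halves with a keyed hash, have a generator systematically favor one half, and then detect the resulting skew in word use.

```python
import hashlib

def is_green(token: str, key: str = "demo-key") -> bool:
    """Deterministically assign each token to a 'green' or 'red' list
    using a keyed hash -- roughly half of all tokens land in each list."""
    digest = hashlib.sha256((key + token.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "demo-key") -> float:
    """Fraction of tokens that fall on the green list."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_green(t, key) for t in tokens) / len(tokens)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    """Ordinary text should hover near 0.5 green; a generator that
    systematically favors green tokens pushes the fraction well above it."""
    return green_fraction(text) > threshold
```

A detector holding the key can flag skewed text; anyone without the key sees only normal-looking prose. Real proposals are far more sophisticated, but this is the basic shape of the idea.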
Is Threads the Good Place?:

Once upon a time on social media, the nicest app of them all, Instagram, home to animal bloopers and filtered selfies, established a land called Threads, a hospitable alternative to the cursed X... Threads would provide a new refuge. It would be Twitter But Nice, a Good Place where X’s liberal exiles could gather around for a free exchange of ideas and maybe even a bit of that 2012 Twitter magic — the goofy memes, the insider riffing, the meeting of new online friends.

...And now, after a mere 10 months, we can see exactly what we built: a full-on bizarro-world X, handcrafted for the left end of the political spectrum, complete with what one user astutely labeled “a cult type vibe.” If progressives and liberals were provoked by Trumpers and Breitbart types on Twitter, on Threads they have the opportunity to be wounded by their own kind...Threads’ algorithm seems precision-tweaked to confront the user with posts devoted to whichever progressive position is slightly lefter-than-thou....There’s some kind of algorithm that’s dusting up the same kind of outrage that Twitter had. Threads feels like it’s splintering the left.

The fragmentation of social media may have been as inevitable as the fragmentation of broadcast media. Perhaps also inevitable, any social media app aiming to succeed financially must capitalize on the worst aspects of social behavior. And it may be that Hobbes, history’s cheery optimist, was right: “The condition of man is a condition of war of every one against every one.” Threads, it turns out, is just another battlefield.


Wednesday, March 27, 2024

Brain changes over our lifetime.

This video from The Economist is one of the best I have seen for a popular audience. Hopefully the basic facts presented are slowly seeping throughout our culture.

Monday, March 25, 2024

If you want to remember a landscape be sure to include a human....

Fascinating observations by Jimenez et al. on our inherent human drive to understand our vastly social world... (and in the same issue of PNAS, note this study on the importance of the social presence of either human or virtual instructors in multimedia instructional videos.)


Writer Kurt Vonnegut once said “if you describe a landscape or a seascape, or a cityscape, always be sure to include a human figure somewhere in the scene. Why? Because readers are human beings, mostly interested in other human beings.” Consistent with Vonnegut’s intuition, we found that the human brain prioritizes learning scenes including people, more so than scenes without people. Specifically, as soon as participants rested after viewing scenes with and without people, the dorsomedial prefrontal cortex of the brain’s default network immediately repeated the scenes with people during rest to promote social memory. The results add insight into the human bias to process the social landscape.


Sociality is a defining feature of the human experience: We rely on others to ensure survival and cooperate in complex social networks to thrive. Are there brain mechanisms that help ensure we quickly learn about our social world to optimally navigate it? We tested whether portions of the brain’s default network engage “by default” to quickly prioritize social learning during the memory consolidation process. To test this possibility, participants underwent functional MRI (fMRI) while viewing scenes from the documentary film, Samsara. This film shows footage of real people and places from around the world. We normed the footage to select scenes that differed along the dimension of sociality, while matched on valence, arousal, interestingness, and familiarity. During fMRI, participants watched the “social” and “nonsocial” scenes, completed a rest scan, and a surprise recognition memory test. Participants showed superior social (vs. nonsocial) memory performance, and the social memory advantage was associated with neural pattern reinstatement during rest in the dorsomedial prefrontal cortex (DMPFC), a key node of the default network. Moreover, it was during early rest that DMPFC social pattern reinstatement was greatest and predicted subsequent social memory performance most strongly, consistent with the “prioritization” account. Results simultaneously update 1) theories of memory consolidation, which have not addressed how social information may be prioritized in the learning process, and 2) understanding of default network function, which remains to be fully characterized. More broadly, the results underscore the inherent human drive to understand our vastly social world.




Wednesday, March 20, 2024

Fundamentally changing the nature of war.

I generally try to keep a distance from 'the real world' and apocalyptic visions of what AI might do, but I decided to pass on some clips from this technology essay in The Wall Street Journal that makes some very plausible predictions about the future of armed conflicts between political entities:

The future of warfare won’t be decided by weapons systems but by systems of weapons, and those systems will cost less. Many of them already exist, whether they’re the Shahed drones attacking shipping in the Gulf of Aden or the Switchblade drones destroying Russian tanks in the Donbas or smart seaborne mines around Taiwan. What doesn’t yet exist are the AI-directed systems that will allow a nation to take unmanned warfare to scale. But they’re coming.

At its core, AI is a technology based on pattern recognition. In military theory, the interplay between pattern recognition and decision-making is known as the OODA loop— observe, orient, decide, act. The OODA loop theory, developed in the 1950s by Air Force fighter pilot John Boyd, contends that the side in a conflict that can move through its OODA loop fastest will possess a decisive battlefield advantage.

For example, of the more than 150 drone attacks on U.S. forces since the Oct. 7 attacks, in all but one case the OODA loop used by our forces was sufficient to subvert the attack. Our warships and bases were able to observe the incoming drones, orient against the threat, decide to launch countermeasures and then act. Deployed in AI-directed swarms, however, the same drones could overwhelm any human-directed OODA loop. It’s impossible to launch thousands of autonomous drones piloted by individuals, but the computational capacity of AI makes such swarms a possibility.
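As a toy illustration of why loop speed matters (my own sketch, not the essay authors'): model a defender that can service one threat per OODA cycle, and count how many attackers slip through when arrivals outpace the cycle. The parameters are invented for illustration.

```python
def leaked_threats(n_drones: int, arrival_interval: float, defender_loop: float) -> int:
    """Count drones that get through: the defender handles one threat per
    OODA cycle; any drone arriving while the defender is still mid-cycle
    on an earlier threat is a 'leaker'."""
    free_at = 0.0  # time the defender finishes its current cycle
    leaked = 0
    for i in range(n_drones):
        t = i * arrival_interval
        if t >= free_at:
            free_at = t + defender_loop  # observe, orient, decide, act on this drone
        else:
            leaked += 1
    return leaked

# Sporadic attacks: arrivals are slower than the defender's loop, so none leak.
print(leaked_threats(10, arrival_interval=5.0, defender_loop=2.0))  # → 0

# A swarm: arrivals come four times faster than the loop, and most leak through.
print(leaked_threats(10, arrival_interval=0.5, defender_loop=2.0))  # → 7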

This will transform warfare. The race won’t be for the best platforms but for the best AI directing those platforms. It’s a war of OODA loops, swarm versus swarm. The winning side will be the one that’s developed the AI-based decision-making that can outpace their adversary. Warfare is headed toward a brain-on-brain conflict.

The Department of Defense is already researching a “brain-computer interface,” which is a direct communications pathway between the brain and an AI. A recent study by the RAND Corporation examining how such an interface could “support human-machine decision-making” raised the myriad ethical concerns that exist when humans become the weakest link in the wartime decision-making chain. To avoid a nightmare future with battlefields populated by fully autonomous killer robots, the U.S. has insisted that a human decision maker must always remain in the loop before any AI-based system might conduct a lethal strike.

But will our adversaries show similar restraint? Or would they be willing to remove the human to gain an edge on the battlefield? The first battles in this new age of warfare are only now being fought. It’s easy to imagine a future, however, where navies will cease to operate as fleets and will become schools of unmanned surface and submersible vessels, where air forces will stand down their squadrons and stand up their swarms, and where a conquering army will appear less like Alexander’s soldiers and more like a robotic infestation.

Much like the nuclear arms race of the last century, the AI arms race will define this current one. Whoever wins will possess a profound military advantage. Make no mistake, if placed in authoritarian hands, AI dominance will become a tool of conquest, just as Alexander expanded his empire with the new weapons and tactics of his age. The ancient historian Plutarch reminds us how that campaign ended: “When Alexander saw the breadth of his domain, he wept, for there were no more worlds to conquer.”

Elliot Ackerman and James Stavridis are the authors of “2054,” a novel that speculates about the role of AI in future conflicts, just published by Penguin Press. Ackerman, a Marine veteran, is the author of numerous books and a senior fellow at Yale’s Jackson School of Global Affairs. Admiral Stavridis, U.S. Navy (ret.), was the 16th Supreme Allied Commander of NATO and is a partner at the Carlyle Group.


Monday, March 18, 2024

The Physics of Non-duality

I want to pass on this lucid posting by "Sean L" of Boston to the Waking Up app's community site:

The physics of nonduality

In this context I mean “nonduality” as it refers to there being no real subject-object separation or duality. One of the implications of physics that originally led me to investigate notions of “awakening” and “no-self” is the idea that there aren’t really any separate objects. We can form very self-consistent and useful concepts of objects (a car, an atom, a self, a city), but from the perspective of the universe itself such objects don’t actually exist as well-defined independent “things.” All that’s real is the universe as one giant, self-interacting, dynamic, but ultimately singular “thing.” If you try to partition off one part of the universe (a self, a galaxy) from the rest, you’ll find that you can’t actually do so in a physically meaningful way (and certainly not one that persists over time). All parts of the universe constantly interact with their local environment, exchanging mass and energy. Objectively, physics says that all points in spacetime are characterized by values of different types of fields and that’s all there is to it. Analogy: you might see this word -->self<-- on your computer screen and think of it as an object, but really it’s just a pattern of independent adjacent pixel values that you’re mapping to a concept. There is no objectively physically real “thing” that is the word self (just a well defined and useful concept). 

This is akin to the idea that few if any of the cells making up your body now are the same as when you were younger. Or the idea that the exact pattern you consider to be “you” in this moment will not be numerically identical to what you consider “you” in the next picosecond. Or the idea that there is nothing that I could draw a closed physical boundary around that perfectly encloses the essence of “you” such that excluding anything inside that boundary means it would no longer contain “you” and including anything currently outside the boundary would mean it contains more than just “you.” This is true even if you try to draw the boundary around only your brain or just specific parts of your brain. I think this is a fun philosophical idea, but also one that often gets the response of “ok, yeah, sure I guess that’s all logically consistent,” but then still feels esoteric. It often feels like it’s just semantics, or splitting hairs, or somehow not something that really threatens the idea of identity or of a physical self. 

I was recently discussing with another WakingUp user that what made this notion far more visceral and convincing to me (enough to motivate me to go out in search of what “I” actually am, which has ultimately led me here) was realizing that even the very idea of trying to draw a boundary around a “thing” is objectively meaningless. So, I thought I’d share what I mean by that in case others find it interesting too :) !

Here are four pictures. The two on the left are pictures of very simple boundaries of varying thickness that one might try to draw around a singular “thing” (perhaps a self?) to demonstrate that it is indeed a well defined object. The two pictures on the right are of the exact same “boundaries” as on the left, but as they would be seen by a creature that evolved to perceive reality in the momentum basis. I’ll better explain what that means in a moment, but the key point is that the pictures on the left and right are (as far as physics or objective reality is concerned) exactly equivalent representations of the same part of reality. Both sets of pictures are perceptual “pointers” to the same part of the universe. You literally cannot say that one is a more veridical or more accurate depiction of reality than the other, because they are equivalent mathematical descriptions of the same underlying objective structure. Humans just happen to have a bias toward the left images.

So then… what could these “boundaries” be enclosing in the pictures on the right? I sure can’t tell. Nor do I think it even makes sense to ask the question! Our sense that there are discrete “objects” in the universe (including selves) seems intuitive when perceiving the universe as shown on the left (as we do). But when perceiving the exact same reality as shown on the right I find this belief very quickly breaks down. There simply is no singular, bounded, contained “thing” on the right. Anything that might at first appear on the left to be a separable object will be completely mixed up with and inseparable from its “surroundings” when viewed on the right, and vice-versa. The boundary itself clearly isn’t even a boundary. Boundaries are (very useful!) concepts, but they have no ultimate objective physical meaning.


Some technical details for those interested (ignore this unless interested):

You can think of a basis like a fancy coordinate system. Analogy: I can define the positions of all the billiard balls on a pool table by defining an XY coordinate system on the table and listing the numerical coordinates of each ball. But if I stretch and/or rotate that coordinate system then all the numbers representing those same positions of the balls will change. The balls themselves haven’t changed positions, but my coordinate system-dependent “perception” of the balls is totally different. They're different ways of perceiving the same fundamental structure (billiard ball positions), even though that structure itself exists independently of any coordinate system. The left and right images are analogous to different coordinate systems, but in a more fundamental way.


In quantum mechanics the correct description of reality is the wave function. For an entangled system of particles you don’t have a separate wave function for each particle. Instead, you have one multi-particle wave function for the whole system (strictly speaking one could argue the entire universe is most correctly described as a single giant and stupidly complicated wave function). There is no way to break up this wave function into separate single-particle wave functions (that’s what it means for the particles to be entangled). What that means is from the perspective of true reality, there literally aren’t separate distinct particles. That’s not just an interpretation – it’s a testable (falsifiable) statement of physical reality and one that has been extensively verified experimentally. So, if you think of yourself as a distinct object, or at least as a dynamical evolving system of interacting (but themselves distinct) particles, sorry, but that’s simply not how we understand reality to actually work :P .

However, to do anything useful we have to write down the wave function (e.g. so we can do calculations with it). We have to represent it mathematically. This requires choosing a basis in which to write it down, much like choosing a coordinate system with which to be able to write down the numerical positions of the billiard balls. A human-intuitive basis is the position basis, which is what’s shown in the left images. However, a completely equivalent way to write down the same wave function is in the momentum basis, which is what’s shown in the right images. There also exist many (really, infinite) other possible bases. Some bases will be more convenient than others depending on the type of calculation you’re trying to do. Ultimately, all bases are arbitrary and none are objectively real, because the universe doesn’t need to “write down” a wave function to compute it. The universe just is. To me, the equivalent representation of the same underlying reality in an infinite diversity of possible Hilbert Spaces (i.e. using different bases) much more viscerally drives home the point that there really are no separate objects (including selves). That’s not just philosophy! There’s just one objective reality (one thing, no duality) that can be perceived in an infinite variety of ways, each with different pros and cons. And our way of perceiving reality lends itself to concepts of separate things and objects.
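A minimal numerical illustration of "same state, different basis" (my own sketch, using a discrete Fourier transform as the position-to-momentum change of basis): a sharply bounded "object" in the position basis has no sharp boundary at all in the momentum basis, yet both arrays describe exactly the same state.

```python
import numpy as np

# A sharply bounded "object" in the position basis: a box of ones.
x = np.zeros(256)
x[100:120] = 1.0

# The same state written in the momentum basis (a unitary Fourier transform).
p = np.fft.fft(x) / np.sqrt(len(x))

# The norm is preserved: it is the same state, just different coordinates.
print(np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(p) ** 2)))  # True

# But localization is not: the box occupies 20 of 256 position bins,
# while in momentum space the amplitude is smeared across nearly the
# whole spectrum. The "boundary" simply isn't there in the other basis.
position_support = np.sum(np.abs(x) > 1e-12)
momentum_support = np.sum(np.abs(p) > 1e-12)
print(position_support, momentum_support)
```

Neither array is "the real one": they are two equivalent ways of writing down the same underlying state, which is exactly the point of the four pictures above.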


There are other parts of physics I didn’t get into here that I think demonstrate that the true nature of the universe must be nondual (maybe to be discussed later). For example, the lack of room for free will or the indistinguishability of particles. If you actually read this whole post, thanks for your time and attention, and I hope you found it as interesting as I do!

Thursday, March 14, 2024

An inexpensive Helium Mobile 5G cellphone plan that pays you to use it?

This is a follow-up to the previous post describing my setting up a 5G hotspot on Helium’s decentralized 5G infrastructure that earns MOBILE tokens. The cash value of the MOBILE tokens earned since July 2022 is ~7X the cost of the equipment needed to generate them.

Now I want to document some further facts for my future self and MindBlog’s techie readers.

Recently Helium has introduced Helium Mobile, a cell phone plan using this new 5G infrastructure, which costs $20/month - much less expensive than other cellular providers like Verizon and AT&T. It has partnered with T-Mobile to fill in coverage areas its own 5G network hasn’t reached.

Nine days ago I downloaded the Helium Mobile app onto my iPhone 12 and set up an account with an eSIM and a new phone number, alongside my longtime number, which remains in a Verizon account using a physical SIM card.

My iPhone has been earning MOBILE tokens by sharing its location to allow better mapping of the Helium 5G network. As I write this, the app has earned 3,346 MOBILE tokens that could be sold and converted to $14.32 at this moment (the price of MOBILE, like other cryptocurrencies, is very volatile).

If this earning rate continues (a big ‘if’), the cellular account I am paying $20/month for will be generating MOBILE tokens each month worth ~$45. The $20 monthly cell phone plan charge can be paid with MOBILE tokens, leaving ~$25/month passive income from my subscribing to Helium Mobile and allowing anonymous tracking of my phone as I move about.  (Apple sends a message every three days asking if I am sure I want to allow continuous tracking by this one app.)
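For the record, the extrapolation arithmetic, a snapshot that assumes the 9-day earning rate holds:

```python
tokens_earned = 3346   # MOBILE tokens after 9 days
cash_value = 14.32     # USD value of those tokens at time of writing
days = 9
plan_cost = 20.00      # USD per month for the Helium Mobile plan

# Project the 9-day snapshot out to a 30-day month.
monthly_value = cash_value / days * 30
net_monthly = monthly_value - plan_cost

print(round(monthly_value, 2))  # → 47.73
print(round(net_monthly, 2))    # → 27.73
```

The net figure is simply the projected monthly token value minus the $20 plan charge; both numbers will swing with the MOBILE price.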

So there you have it.  Any cautionary notes from techie readers about the cybersecurity implications of what I am doing would be welcome.  

Wednesday, March 13, 2024

MindBlog becomes a 5G cellular hotspot in the low-priced ‘People’s Cell Phone Network’ - Helium Mobile

I am writing this post, as is frequently the case, for myself to be able to look up in the future, as well as for MindBlog techie readers who might stumble across it. It describes my setup of a 5G hotspot in the new Helium Mobile 5G network. A post following this one will describe my becoming a user of this new cell phone network by putting the Helium Mobile app on my iPhone using an eSIM.

This becomes my third post describing my involvement in the part of the crypto movement seeking to 'return power to the people.' It attempts to bypass the large corporations that are the current gatekeepers and regulators of commerce and communications, and that are able to assert controls serving their own interests and profits more than the public good.

The two previous posts (here and here) describe my being seduced into crypto-world by my son's having made a six-hundred-fold return on investment by virtue of being in the first cohort (during the "genesis" period) to put little black boxes and antennas on their window sills, earning HNT (Helium blockchain tokens) using LoRa 868 MHz antennas transmitting and receiving in the 'Internet of Things.' I was a latecomer, and in the 22 months since June of 2022 have earned ~$200 on an investment of ~$500 of equipment.

Helium next came up with the idea of setting up its own 5G cell phone network, called Helium Mobile. Individual Helium 5G hotspots (small cell phone antennas) use Citizens Broadband Radio Service (CBRS) radios to provide cellular coverage like that provided by telecom companies' more expensive networks of towers. (CBRS is a wide broadcast 3.5 GHz band in the United States that does not require a spectrum license for use.)

In July of 2022, I decided to set up the Helium 5G hotspot equipment shown in the picture below, to be in the genesis period for the establishment of this new Helium 5G cellular network. I made my Abyssinian cat Martin, shown in front of the Bobber 500 miner, the system administrator. The 5G antenna seen on the sill in the middle of the window views ~170 degrees of the southern sky.

This system cost ~$2,500 and by early March 2024 had earned ~4.3 million MOBILE tokens worth ~$18,000. As in a Ponzi scheme, most of the rewards are from the genesis period; March 2024 earnings are ~$45/week. If this rate of earnings persists, it represents an annual ROI (return on investment) of ~100%.

The writing above is mine, and I decided, just for the heck of it, to recruit the assistance of several GPT engines (Google Gemini, OpenAI ChatGPT-4, Anthropic Claude, and Microsoft Bing) with the following prompt:

Could you please act as Deric Bownds writing a post for his Blog, Deric’s MindBlog at Describe my setting up of a Helium G5 hot spot using the Bobber 500 miner with FreedomFi CBRS and an internal antenna during the genesis period for the establishment of the new Helium G5 cellular network. 

The responses of Gemini and Bing were excited, poetic, lyrical, overwrought hyperbole, very much in the spirit of the crypto-world pitches one sees online. ChatGPT-4 (the best response) and Claude did toss up some interesting paragraphs, and below I pass on some of ChatGPT-4's slightly edited and redundant text. I would be more sanguine about all of the hype over 'democratic participation' (or at least my participation) if I had not been completely flummoxed by the cacophony of the Helium Discord server, YouTube videos, and the complexity of the HIP (Helium Improvement Proposal) votes that ratify new changes and improvements to the system.

The Helium Network, known for pioneering decentralized wireless infrastructure, has embarked on an ambitious journey to extend its reach into 5G, leveraging the power of blockchain technology to create a community-powered network. The initiative is ambitious, aiming to democratize cellular connectivity, making it more accessible and affordable. This endeavor is not just about technology; it's about reimagining the fabric of our digital society.
This genesis period of the Helium G5 network represented a fascinating intersection of technology, psychology, and community. The network's decentralized nature challenges traditional models of infrastructure ownership and control, proposing a more democratic and participatory framework. It prompts us to reflect on our relationship with technology: Are we mere consumers, or can we also be contributors and custodians of the digital landscapes we inhabit?
As more individuals and communities join this initiative, setting up their hotspots and expanding the network, a unique form of collective intelligence emerges. This network, powered by the people, for the people, embodies a new kind of digital ecosystem—one that values collaboration over competition, and shared success over isolated gains.
The implications for our cognitive landscapes are profound. By democratizing access to 5G connectivity, we are not just enhancing our ability to communicate; we are potentially altering the way we think, learn, and engage with the world around us. The Helium G5 network invites us to imagine a future where connectivity is as natural and essential as the air we breathe, a future where our digital and physical realities are seamlessly intertwined.
The technology we embrace today shapes the world we live in tomorrow. Through initiatives like the Helium G5 network, we have the opportunity to craft a digital environment that reflects our highest aspirations for society—one that is open, inclusive, and fundamentally human.
In this journey of connectivity, we are all contributors, shaping the digital ethos of our era. The genesis of the Helium G5 network is not just a technical milestone; it is a call to action, inviting us to rethink our role in the digital age and to contribute to a network that embodies our collective values and aspirations.

The next post describes my becoming a customer of the new Helium Mobile cellular service plan costing $20/month.


Monday, March 11, 2024

How AI’s GPT engines work - Lanier’s forest and trees metaphor.

Jaron Lanier does a piece in The New Yorker titled "How to Picture A.I." (if you hit the paywall by clicking the link, try opening an empty tab in your browser and pasting in the URL that got you the paywall). I tried to do my usual sampling of small chunks of text to convey the message, but found that very difficult, so I pass on several early paragraphs and urge you to read the whole article. Lanier's metaphors give me a better sense of what is going on in a GPT engine, but I'm still largely mystified. Anyway, here's some text:
In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input.
Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.
Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.
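Lanier's tower of grids can be sketched in a few lines (my own toy rendering of his cartoon, not his code, with made-up details): a bottom layer that measures edge prominence in the image, and higher layers that each summarize 2x2 patches of the layer beneath, so the grids shrink as you climb toward the canopy.

```python
import numpy as np

def edge_prominence(image: np.ndarray) -> np.ndarray:
    """Bottom layer of the 'tree': for each pixel, measure how sharply
    values change around it (a crude edge detector)."""
    dy = np.abs(np.diff(image, axis=0, prepend=image[:1]))
    dx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    return dx + dy

def summarize(grid: np.ndarray) -> np.ndarray:
    """Each higher layer measures 2x2 patches of the layer beneath it,
    so the grids shrink as you climb the trunk."""
    h, w = grid.shape
    return grid[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Build a small tower of layers over a toy 8x8 "image".
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0  # a bright square; its outline is what the bottom layer detects

layers = [edge_prominence(image)]
while min(layers[-1].shape) > 1:
    layers.append(summarize(layers[-1]))

print([layer.shape for layer in layers])  # → [(8, 8), (4, 4), (2, 2), (1, 1)]
```

The "training" Lanier describes would then adjust numbers inside these layers until the canopy value separates cats from dogs; here the measurements are fixed, which is why this is only the untrained trunk of his tree.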

Friday, March 08, 2024

Explaining the evolution of gossip

A fascinating open-access article from Pan et al.:

From Mesopotamian cities to industrialized nations, gossip has been at the center of bonding human groups. Yet the evolution of gossip remains a puzzle. The current article argues that gossip evolves because its dissemination of individuals’ reputations induces individuals to cooperate with those who gossip. As a result, gossipers proliferate as well as sustain the reputation system and cooperation.
Gossip, the exchange of personal information about absent third parties, is ubiquitous in human societies. However, the evolution of gossip remains a puzzle. The current article proposes an evolutionary cycle of gossip and uses an agent-based evolutionary game-theoretic model to assess it. We argue that the evolution of gossip is the joint consequence of its reputation dissemination and selfishness deterrence functions. Specifically, the dissemination of information about individuals’ reputations leads more individuals to condition their behavior on others’ reputations. This induces individuals to behave more cooperatively toward gossipers in order to improve their reputations. As a result, gossiping has an evolutionary advantage that leads to its proliferation. The evolution of gossip further facilitates these two functions of gossip and sustains the evolutionary cycle.
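A bare-bones version of the kind of agent-based model the authors describe (my own toy sketch with made-up payoffs and rules, not their actual model): agents cooperate only when a defection would be broadcast by a gossiping partner, so gossipers end up attracting more cooperation than silent agents.

```python
import random

def play_round(agents: dict, rng: random.Random) -> dict:
    """One donation-game round: each actor meets a random partner and
    cooperates only if its reputation is at stake, i.e. if the partner
    gossips (so a defection would be disseminated to everyone)."""
    payoffs = {i: 0.0 for i in agents}
    for actor in agents:
        partner = rng.choice([a for a in agents if a != actor])
        if agents[partner]["gossips"]:
            payoffs[actor] -= 1.0    # cost of cooperating
            payoffs[partner] += 3.0  # benefit to the gossiping partner
    return payoffs

rng = random.Random(0)
agents = {i: {"gossips": i < 5} for i in range(10)}  # 5 gossipers, 5 silent
totals = {i: 0.0 for i in agents}
for _ in range(200):
    for i, p in play_round(agents, rng).items():
        totals[i] += p

gossiper_avg = sum(totals[i] for i in range(5)) / 5
silent_avg = sum(totals[i] for i in range(5, 10)) / 5
print(gossiper_avg > silent_avg)  # gossipers accumulate higher payoffs
```

Even in this stripped-down form the paper's core mechanism appears: because gossip makes reputations public, it pays to cooperate with gossipers, which in turn gives gossiping an evolutionary advantage. The full model adds reputation dynamics and strategy evolution that this sketch omits.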

Wednesday, March 06, 2024

Deep learning models reveal sex differences in human functional brain organization that are replicable, generalizable, and behaviorally relevant

Ryali et al. report a massive analysis that argues strongly against the notion of a continuum in male-female brain organization and underscores the crucial role of sex as a biological determinant of human brain organization and behavior. I pass on the significance statement and abstract below. Motivated readers can obtain a PDF of the article from me.


Sex is an important biological factor that influences human behavior, impacting brain function and the manifestation of psychiatric and neurological disorders. However, previous research on how brain organization differs between males and females has been inconclusive. Leveraging recent advances in artificial intelligence and large multicohort fMRI (functional MRI) datasets, we identify highly replicable, generalizable, and behaviorally relevant sex differences in human functional brain organization localized to the default mode network, striatum, and limbic network. Our findings advance the understanding of sex-related differences in brain function and behavior. More generally, our approach provides AI–based tools for probing robust, generalizable, and interpretable neurobiological measures of sex differences in psychiatric and neurological disorders.
Sex plays a crucial role in human brain development, aging, and the manifestation of psychiatric and neurological disorders. However, our understanding of sex differences in human functional brain organization and their behavioral consequences has been hindered by inconsistent findings and a lack of replication. Here, we address these challenges using a spatiotemporal deep neural network (stDNN) model to uncover latent functional brain dynamics that distinguish male and female brains. Our stDNN model accurately differentiated male and female brains, demonstrating consistently high cross-validation accuracy (>90%), replicability, and generalizability across multisession data from the same individuals and three independent cohorts (N ~ 1,500 young adults aged 20 to 35). Explainable AI (XAI) analysis revealed that brain features associated with the default mode network, striatum, and limbic network consistently exhibited significant sex differences (effect sizes > 1.5) across sessions and independent cohorts. Furthermore, XAI-derived brain features accurately predicted sex-specific cognitive profiles, a finding that was also independently replicated. Our results demonstrate that sex differences in functional brain dynamics are not only highly replicable and generalizable but also behaviorally relevant, challenging the notion of a continuum in male-female brain organization. Our findings underscore the crucial role of sex as a biological determinant in human brain organization, have significant implications for developing personalized sex-specific biomarkers in psychiatric and neurological disorders, and provide innovative AI-based computational tools for future research.
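The paper's stDNN and XAI pipeline is far beyond a blog snippet, but the basic logic of cross-validated classification from brain features can be sketched with a stand-in: synthetic two-class data with a built-in mean difference on a few features, classified by a simple nearest-centroid rule under k-fold cross-validation. Everything here (data, classifier, fold count) is an invented simplification, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "functional brain features": two groups with a
# real difference (large effect) on the first 5 of 20 features.
n, d = 200, 20
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1, :5] += 1.5

def nearest_centroid_cv(X, y, k=5):
    # k-fold cross-validation of a nearest-centroid classifier:
    # assign each held-out sample to the closer class mean.
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[fold] - c0, axis=1)
        d1 = np.linalg.norm(X[fold] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append(float((pred == y[fold]).mean()))
    return float(np.mean(accs))

acc = nearest_centroid_cv(X, y)
```

Because the group difference is built in, even this crude classifier reaches high held-out accuracy; the paper's point is that a deep model finds such separations in real fMRI dynamics, replicated across independent cohorts.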

Monday, March 04, 2024

Brains creating stories of selves: the neural basis of autobiographical reasoning

We all construct our experienced selves through autobiographical reasoning about remembered events from our lives. In an open access article in the journal Social Cognitive and Affective Neuroscience, D'Argembeau et al. report an interesting fMRI study identifying brain areas that are active during this process but are not recruited by simpler factual recall of the same events.

...A few days before the scanning session, participants selected a set of memories that have been important in developing and sustaining their sense of self and identity (self-defining memories). During scanning, we instructed participants to approach each of their memories in two different ways: in some trials, they had to remember the concrete content of the event in order to mentally re-experience the situation in its original context (autobiographical remembering), whereas in other trials they were asked to reflect on the broader meaning and implications of their memory (autobiographical reasoning). Contrasting the neural activity associated with these two ways of approaching the same self-defining memories allowed us to identify the brain regions specifically involved in the autobiographical reasoning process.

The text of the article describes the functions of the brain areas mentioned in the abstract (below) and includes a nice graphic contrasting the areas more active during autobiographical remembering with those more active during autobiographical reasoning. Here is the abstract:

Personal identity critically depends on the creation of stories about the self and one’s life. The present study investigates the neural substrates of autobiographical reasoning, a process central to the construction of such narratives. During functional magnetic resonance imaging scanning, participants approached a set of personally significant memories in two different ways: in some trials, they remembered the concrete content of the events (autobiographical remembering), whereas in other trials they reflected on the broader meaning and implications of their memories (autobiographical reasoning). Relative to remembering, autobiographical reasoning recruited a left-lateralized network involved in conceptual processing [including the dorsal medial prefrontal cortex (MPFC), inferior frontal gyrus, middle temporal gyrus and angular gyrus]. The ventral MPFC—an area that may function to generate personal/affective meaning—was not consistently engaged during autobiographical reasoning across participants but, interestingly, the activity of this region was modulated by individual differences in interest and willingness to engage in self-reflection. These findings support the notion that autobiographical reasoning and the construction of personal narratives go beyond mere remembering in that they require deriving meaning and value from past experiences.

Friday, March 01, 2024

The Hidden History of Debt

I pass on this link from the latest Human Bridges newsletter, and would encourage readers to subscribe to and support the Observatory's Human Bridges project, which is part of the Independent Media Institute:

Recent scientific findings in the study of human origins, our biology, paleoanthropology, and primate research have reached a key threshold: we are increasingly able to trace the outlines and fill in the blanks of our evolutionary story, from its beginning 7 million years ago to the present, and to understand the social and cultural processes that produced the world we live in now.