Monday, March 18, 2024

The Physics of Non-duality

I want to pass on this lucid posting by "Sean L" of Boston on the community.wakingup.com site:

The physics of nonduality

In this context I mean “nonduality” as it refers to there being no real subject-object separation or duality. One of the implications of physics that originally led me to investigate notions of “awakening” and “no-self” is the idea that there aren’t really any separate objects. We can form very self-consistent and useful concepts of objects (a car, an atom, a self, a city), but from the perspective of the universe itself such objects don’t actually exist as well-defined independent “things.” All that’s real is the universe as one giant, self-interacting, dynamic, but ultimately singular “thing.” If you try to partition off one part of the universe (a self, a galaxy) from the rest, you’ll find that you can’t actually do so in a physically meaningful way (and certainly not one that persists over time). All parts of the universe constantly interact with their local environment, exchanging mass and energy. Objectively, physics says that all points in spacetime are characterized by values of different types of fields and that’s all there is to it. Analogy: you might see this word -->self<-- on your computer screen and think of it as an object, but really it’s just a pattern of independent adjacent pixel values that you’re mapping to a concept. There is no objectively physically real “thing” that is the word self (just a well defined and useful concept). 
 

This is akin to the idea that few if any of the cells making up your body now are the same as when you were younger. Or the idea that the exact pattern you consider to be “you” in this moment will not be numerically identical to what you consider “you” in the next picosecond. Or the idea that there is nothing that I could draw a closed physical boundary around that perfectly encloses the essence of “you” such that excluding anything inside that boundary means it would no longer contain “you” and including anything currently outside the boundary would mean it contains more than just “you.” This is true even if you try to draw the boundary around only your brain or just specific parts of your brain. I think this is a fun philosophical idea, but also one that often gets the response of “ok, yeah, sure I guess that’s all logically consistent,” but then still feels esoteric. It often feels like it’s just semantics, or splitting hairs, or somehow not something that really threatens the idea of identity or of a physical self. 
 

I was recently discussing with another WakingUp user that what made this notion far more visceral and convincing to me (enough to motivate me to go out in search of what “I” actually am, which has ultimately led me here) was realizing that even the very idea of trying to draw a boundary around a “thing” is objectively meaningless. So, I thought I’d share what I mean by that in case others find it interesting too :) !

Here are four pictures. The two on the left are pictures of very simple boundaries of varying thickness that one might try to draw around a singular “thing” (perhaps a self?) to demonstrate that it is indeed a well defined object. The two pictures on the right are of the exact same “boundaries” as on the left, but as they would be seen by a creature that evolved to perceive reality in the momentum basis. I’ll better explain what that means in a moment, but the key point is that the pictures on the left and right are (as far as physics or objective reality is concerned) exactly equivalent representations of the same part of reality. Both sets of pictures are perceptual “pointers” to the same part of the universe. You literally cannot say that one is a more veridical or more accurate depiction of reality than the other, because they are equivalent mathematical descriptions of the same underlying objective structure. Humans just happen to have a bias toward the left images.
 

So then… what could these “boundaries” be enclosing in the pictures on the right? I sure can’t tell. Nor do I think it even makes sense to ask the question! Our sense that there are discrete “objects” in the universe (including selves) seems intuitive when perceiving the universe as shown on the left (as we do). But when perceiving the exact same reality as shown on the right I find this belief very quickly breaks down. There simply is no singular, bounded, contained “thing” on the right. Anything that might at first appear on the left to be a separable object will be completely mixed up with and inseparable from its “surroundings” when viewed on the right, and vice-versa. The boundary itself clearly isn’t even a boundary. Boundaries are (very useful!) concepts, but they have no ultimate objective physical meaning.

----------------------

Some technical details for those interested (ignore this unless interested):

You can think of a basis like a fancy coordinate system. Analogy: I can define the positions of all the billiard balls on a pool table by defining an XY coordinate system on the table and listing the numerical coordinates of each ball. But if I stretch and/or rotate that coordinate system then all the numbers representing those same positions of the balls will change. The balls themselves haven’t changed positions, but my coordinate system-dependent “perception” of the balls is totally different. They're different ways of perceiving the same fundamental structure (billiard ball positions), even though that structure itself exists independently of any coordinate system. The left and right images are analogous to different coordinate systems, but in a more fundamental way.
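To make the coordinate-system analogy concrete (my own sketch, not part of Sean L's post; the ball positions are made up), here is a small numpy example: the numbers describing the same billiard-ball positions change when the frame is rotated, while frame-independent facts such as inter-ball distances do not.

```python
import numpy as np

# Positions of three billiard balls on the table (made-up numbers, in meters),
# expressed in an XY frame aligned with the table's edges.
balls_xy = np.array([[0.30, 0.50],
                     [1.10, 0.25],
                     [0.75, 0.90]])

# Describe the *same* physical positions in a frame rotated by 30 degrees.
theta = np.radians(30)
rotation = np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])
balls_rotated = balls_xy @ rotation.T

print(balls_xy)       # coordinates in the original frame
print(balls_rotated)  # different numbers, same physical positions

# Coordinate-independent facts (e.g., the distance between two balls) are unchanged.
d_original = np.linalg.norm(balls_xy[0] - balls_xy[1])
d_rotated = np.linalg.norm(balls_rotated[0] - balls_rotated[1])
print(np.isclose(d_original, d_rotated))  # True
```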

 

In quantum mechanics the correct description of reality is the wave function. For an entangled system of particles you don’t have a separate wave function for each particle. Instead, you have one multi-particle wave function for the whole system (strictly speaking one could argue the entire universe is most correctly described as a single giant and stupidly complicated wave function). There is no way to break up this wave function into separate single-particle wave functions (that’s what it means for the particles to be entangled). What that means is from the perspective of true reality, there literally aren’t separate distinct particles. That’s not just an interpretation – it’s a testable (falsifiable) statement of physical reality and one that has been extensively verified experimentally. So, if you think of yourself as a distinct object, or at least as a dynamical evolving system of interacting (but themselves distinct) particles, sorry, but that’s simply not how we understand reality to actually work :P .
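To make the "no separate particles" claim concrete (again my own aside, not Sean L's): for two qubits, a state factorizes into single-particle states exactly when its coefficient matrix has a single nonzero Schmidt coefficient (i.e., is rank one), and this is easy to check numerically. A Bell state fails the test:

```python
import numpy as np

# Two-qubit states written as 2x2 coefficient matrices psi[i, j] = amplitude
# of |i>|j>. A state is a product state iff this matrix has rank 1,
# i.e., only one nonzero singular value (Schmidt coefficient).

def schmidt_coefficients(psi):
    """Singular values of the coefficient matrix of a two-qubit state."""
    return np.linalg.svd(psi, compute_uv=False)

# Product state |0>(|0>+|1>)/sqrt(2): factorizable, Schmidt rank 1.
product_state = np.array([[1.0, 1.0],
                          [0.0, 0.0]]) / np.sqrt(2)

# Bell state (|00> + |11>)/sqrt(2): entangled, Schmidt rank 2.
bell_state = np.array([[1.0, 0.0],
                       [0.0, 1.0]]) / np.sqrt(2)

print(schmidt_coefficients(product_state))  # ~[1.0, 0.0]     -> separable
print(schmidt_coefficients(bell_state))     # ~[0.707, 0.707] -> no single-particle factorization
```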
 

However, to do anything useful we have to write down the wave function (e.g. so we can do calculations with it). We have to represent it mathematically. This requires choosing a basis in which to write it down, much like choosing a coordinate system with which to be able to write down the numerical positions of the billiard balls. A human-intuitive basis is the position basis, which is what’s shown in the left images. However, a completely equivalent way to write down the same wave function is in the momentum basis, which is what’s shown in the right images. There also exist many (really, infinite) other possible bases. Some bases will be more convenient than others depending on the type of calculation you’re trying to do. Ultimately, all bases are arbitrary and none are objectively real, because the universe doesn’t need to “write down” a wave function to compute it. The universe just is. To me, the equivalent representation of the same underlying reality in an infinite diversity of possible Hilbert Spaces (i.e. using different bases) much more viscerally drives home the point that there really are no separate objects (including selves). That’s not just philosophy! There’s just one objective reality (one thing, no duality) that can be perceived in an infinite variety of ways, each with different pros and cons. And our way of perceiving reality lends itself to concepts of separate things and objects.
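Here is a small numerical illustration of the position-basis versus momentum-basis point (my addition, in one dimension): a sharply "bounded" blob in the position basis becomes a spread-out oscillatory pattern in the momentum basis, yet the two arrays are exactly equivalent descriptions, related by a unitary (Fourier) change of basis.

```python
import numpy as np

# A 1D "wave function" that looks like a sharply bounded object in the
# position basis: constant inside a small region, zero outside.
n = 256
psi_position = np.zeros(n)
psi_position[100:140] = 1.0
psi_position /= np.linalg.norm(psi_position)  # normalize

# The same state written in the momentum basis (discrete Fourier transform,
# with norm='ortho' so the change of basis is unitary).
psi_momentum = np.fft.fft(psi_position, norm='ortho')

# Both representations describe the identical state: the norm is preserved
# and the inverse transform recovers the position-basis numbers.
print(np.isclose(np.linalg.norm(psi_momentum), 1.0))                        # True
print(np.allclose(np.fft.ifft(psi_momentum, norm='ortho'), psi_position))   # True

# But the crisp "boundary" visible in psi_position is smeared across most of
# the momentum components; there is no sharply bounded object in this basis.
print(np.sum(np.abs(psi_momentum) > 1e-3))  # count of non-negligible momentum components
```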

 

There are other parts of physics I didn’t get into here that I think demonstrate that the true nature of the universe must be nondual (maybe to be discussed later). For example, the lack of room for free will or the indistinguishability of particles. If you actually read this whole post, thanks for your time and attention, and I hope you found it as interesting as I do!

Thursday, March 14, 2024

An inexpensive Helium Mobile 5G cellphone plan that pays you to use it?

This is a followup to the previous post describing my setting up a 5G hotspot on Helium’s decentralized 5G infrastructure that earns MOBILE tokens. The cash value of the MOBILE tokens earned since July 2022 is ~7X the cost of the equipment needed to generate them.

Now I want to document some further facts for my future self and MindBlog’s techie readers.

Recently Helium has introduced Helium Mobile, a cell phone plan using this new 5G infrastructure that costs $20/month - much less expensive than other cellular providers like Verizon and AT&T. It has partnered with T-Mobile to fill in coverage areas its own 5G network hasn’t reached.

Nine days ago I downloaded the Helium Mobile app onto my iPhone 12 and set up an account with an eSIM and a new phone number, alongside my phone number of many years now in a Verizon account using a physical SIM card.  

My iPhone has been earning MOBILE tokens by sharing its location to allow better mapping of the Helium 5G network. As I am writing this, the app has earned 3,346 MOBILE tokens that could be sold and converted to $14.32 at this moment (the price of MOBILE, like other cryptocurrencies, is very volatile).

If this earning rate continues (a big ‘if’), the cellular account I am paying $20/month for will be generating MOBILE tokens each month worth ~$45. The $20 monthly cell phone plan charge can be paid with MOBILE tokens, leaving ~$25/month passive income from my subscribing to Helium Mobile and allowing anonymous tracking of my phone as I move about.  (Apple sends a message every three days asking if I am sure I want to allow continuous tracking by this one App.)
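For the record, here is the straight-line arithmetic behind that projection, using only the figures quoted above and assuming the very volatile MOBILE price holds; it lands in the same ballpark as the ~$45 and ~$25 estimates:

```python
# Rough projection from the figures in this post (9 days of mapping rewards).
days_elapsed = 9
usd_value = 14.32            # current cash value of the 3,346 tokens earned so far
plan_cost_per_month = 20.0   # Helium Mobile plan, payable in MOBILE tokens

usd_per_day = usd_value / days_elapsed
usd_per_month = usd_per_day * 30                      # roughly $48/month at the current price
net_per_month = usd_per_month - plan_cost_per_month   # roughly $28/month after the plan is paid

print(f"~${usd_per_month:.0f}/month gross, ~${net_per_month:.0f}/month net")
```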

So there you have it.  Any cautionary notes from techie readers about the cybersecurity implications of what I am doing would be welcome.  
 

Wednesday, March 13, 2024

MindBlog becomes a 5G cellular hotspot in the low-priced ‘People’s Cell Phone Network’ - Helium Mobile

I am writing this post, as is frequently the case, for myself to be able to look up in the future, as well as for MindBlog techie readers who might stumble across it. It describes my setup of a 5G hotspot in the new Helium 5G network. A post following this one will describe my becoming a user of this new cell phone network by putting the Helium Mobile App on my iPhone using an eSIM.

This becomes my third post describing my involvement in the part of the crypto movement seeking to 'return power to the people.' It attempts to bypass the large corporations that are the current gatekeepers and regulators of commerce and communications, and who are able to assert controls that serve their own self-interests and profits more than the public good.

The two previous posts (here and here) describe my being seduced into crypto-world by my son's having made a six-hundred-fold return on investment by virtue of being in the first cohort (during the "genesis" period) to put little black boxes and antennas on their window sills, earning HNT (Helium blockchain tokens) using LoRa 868 MHz antennas transmitting and receiving in the 'Internet of Things.' I was a latecomer, and in the 22 months since June of 2022 have earned ~$200 on an investment of ~$500 of equipment.

Helium next came up with the idea of setting up its own 5G cell phone network, called Helium Mobile. Individual Helium 5G Hotspots (small cell phone antennas) use Citizens Broadband Radio Service (CBRS) radios to provide cellular coverage like that provided by telecom companies' more expensive networks of towers. (CBRS is a broad 3.5 GHz band in the United States that does not require a spectrum license for use.)

In July of 2022, I decided to set up the Helium 5G hotspot equipment shown in the picture below, to be in the genesis period for the establishment of this new Helium 5G cellular network. I made my Abyssinian cat Martin, shown in front of the Bobber 500 miner, the system administrator. The 5G antenna seen on the sill in the middle of the window views ~170 degrees of the southern sky.

This system cost ~$2,500 and by early March 2024 has earned ~4.3 million MOBILE tokens worth ~$18,000. As in a Ponzi scheme, most of the rewards are from the genesis period; March 2024 earnings are ~$45/week. If this rate of earnings persists, it represents an annual ROI (return on investment) of ~100%.
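The ROI estimate works out as follows; this is a rough projection from the figures above that ignores the genesis-period windfall already banked and assumes the current earning rate and token price hold:

```python
# Back-of-envelope annualized return from the figures in this post.
equipment_cost = 2500.0        # Bobber 500 miner, CBRS radio, and antenna
weekly_earnings_usd = 45.0     # March 2024 earning rate

annual_earnings = weekly_earnings_usd * 52       # ~ $2,340/year
annual_roi = annual_earnings / equipment_cost    # ~ 0.94, i.e. roughly 100%

print(f"~${annual_earnings:.0f}/year, ROI ~{annual_roi:.0%}")
```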

The writing above is mine, and I decided just for the heck of it to recruit the assistance of several GPT engines (Google Gemini, OpenAI ChatGPT-4, Anthropic Claude, and Microsoft Bing) with the following prompt:

Could you please act as Deric Bownds writing a post for his Blog, Deric’s MindBlog at http://mindblog.dericbownds.net. Describe my setting up of a Helium G5 hot spot using the Bobber 500 miner with FreedomFi CBRS and an internal antenna during the genesis period for the establishment of the new Helium G5 cellular network. 

The responses of Gemini and Bing were excited, poetic, lyrical, overwrought hyperbole, very much in the spirit of the cyber-world pitches one sees online. ChatGPT-4 (the best response) and Claude did toss up some interesting paragraphs, and below I pass on some of ChatGPT-4's slightly edited and redundant text. I would be more sanguine about all of the hype over 'democratic participation' (or at least my participation) if I had not been completely flummoxed by the cacophony of the Helium Discord server, YouTube videos, and the complexity of the HIP (Helium Improvement Proposal) votes that ratify new changes and improvements to the system.

The Helium Network, known for pioneering decentralized wireless infrastructure, has embarked on an ambitious journey to extend its reach into 5G, leveraging the power of blockchain technology to create a community-powered network. The initiative is ambitious, aiming to democratize cellular connectivity, making it more accessible and affordable. This endeavor is not just about technology; it's about reimagining the fabric of our digital society.
This genesis period of the Helium G5 network represented a fascinating intersection of technology, psychology, and community. The network's decentralized nature challenges traditional models of infrastructure ownership and control, proposing a more democratic and participatory framework. It prompts us to reflect on our relationship with technology: Are we mere consumers, or can we also be contributors and custodians of the digital landscapes we inhabit?
As more individuals and communities join this initiative, setting up their hotspots and expanding the network, a unique form of collective intelligence emerges. This network, powered by the people, for the people, embodies a new kind of digital ecosystem—one that values collaboration over competition, and shared success over isolated gains.
The implications for our cognitive landscapes are profound. By democratizing access to 5G connectivity, we are not just enhancing our ability to communicate; we are potentially altering the way we think, learn, and engage with the world around us. The Helium G5 network invites us to imagine a future where connectivity is as natural and essential as the air we breathe, a future where our digital and physical realities are seamlessly intertwined.
The technology we embrace today shapes the world we live in tomorrow. Through initiatives like the Helium G5 network, we have the opportunity to craft a digital environment that reflects our highest aspirations for society—one that is open, inclusive, and fundamentally human.
In this journey of connectivity, we are all contributors, shaping the digital ethos of our era. The genesis of the Helium G5 network is not just a technical milestone; it is a call to action, inviting us to rethink our role in the digital age and to contribute to a network that embodies our collective values and aspirations.

The next post describes my becoming a customer of the new Helium Mobile cellular service plan costing $20/month.


 

Monday, March 11, 2024

How AI’s GPT engines work - Lanier’s forest and trees metaphor.

Jaron Lanier does a piece in The New Yorker titled "How to Picture A.I." (if you hit the paywall by clicking the link, try opening an "empty tab" in your browser, then copy and paste in the URL that got you to the paywall). I tried to do my usual sampling of small chunks of text to give the message, but found that very difficult, so I pass on several early paragraphs and urge you to read the whole article. Lanier's metaphors give me a better sense of what is going on in a GPT engine, but I'm still largely mystified. Anyway, here's some text:
In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input.
Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.
Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.
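For readers who like to see metaphors cashed out in code, here is a tiny toy sketch of the "tower of grids" idea (my construction, not Lanier's, and nothing like a real vision model): the bottom layer measures how sharply pixel values change in each patch, higher layers summarize the layer below, and a few "canopy" weights are nudged on a small synthetic labeled set until the output lands near 1 for "dog"-like images and 2 for "cat"-like ones. The images, labels, and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_layer(image, patch=4):
    """Bottom grid: per-patch measure of how sharply pixel values change."""
    h, w = image.shape
    grid = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = image[i:i + patch, j:j + patch]
            grid[i // patch, j // patch] = (np.abs(np.diff(p, axis=0)).mean()
                                            + np.abs(np.diff(p, axis=1)).mean())
    return grid

def pool_layer(grid, block=2):
    """Higher grid: each cell summarizes a block of the layer beneath it."""
    h, w = grid.shape
    return grid.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def tree_features(image):
    """Climb the tree: the numbers at the canopy, plus one constant cell."""
    canopy = pool_layer(pool_layer(edge_layer(image))).ravel()
    return np.append(canopy, 1.0)

def make_image(label, size=32):
    """Synthetic stand-ins: 'dog' = coarse blobs, 'cat' = fine vertical stripes."""
    if label == "dog":
        img = np.kron(rng.random((4, 4)), np.ones((size // 4, size // 4)))
    else:
        img = np.tile(rng.random((1, size)), (size, 1))
    return img + 0.05 * rng.random((size, size))

train = [(make_image(lbl), 1.0 if lbl == "dog" else 2.0)
         for lbl in ["dog", "cat"] * 20]

# "Training": nudge the canopy weights (Lanier's magic laser) to shrink the
# error between the canopy's output and the label (1 = dog, 2 = cat).
weights = np.zeros(tree_features(train[0][0]).size)
for _ in range(200):
    for image, target in train:
        x = tree_features(image)
        weights -= 0.1 * (x @ weights - target) * x

# Outputs now cluster near 1 for 'dog' images and near 2 for 'cat' images.
print([round(float(tree_features(img) @ weights), 2) for img, _ in train[:4]])
```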

Friday, March 08, 2024

Explaining the evolution of gossip

A fascinating open access article from Pan et al.:

Significance
From Mesopotamian cities to industrialized nations, gossip has been at the center of bonding human groups. Yet the evolution of gossip remains a puzzle. The current article argues that gossip evolves because its dissemination of individuals’ reputations induces individuals to cooperate with those who gossip. As a result, gossipers proliferate as well as sustain the reputation system and cooperation.
Abstract
Gossip, the exchange of personal information about absent third parties, is ubiquitous in human societies. However, the evolution of gossip remains a puzzle. The current article proposes an evolutionary cycle of gossip and uses an agent-based evolutionary game-theoretic model to assess it. We argue that the evolution of gossip is the joint consequence of its reputation dissemination and selfishness deterrence functions. Specifically, the dissemination of information about individuals’ reputations leads more individuals to condition their behavior on others’ reputations. This induces individuals to behave more cooperatively toward gossipers in order to improve their reputations. As a result, gossiping has an evolutionary advantage that leads to its proliferation. The evolution of gossip further facilitates these two functions of gossip and sustains the evolutionary cycle.
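To give a flavor of what an agent-based evolutionary game-theoretic model of gossip can look like, here is a deliberately minimal toy of my own (it is not Pan et al.'s model, and it captures only the reputation-dissemination ingredient, not their full argument or results): agents play one-shot donation games, recipients who are gossipers broadcast what the donor did to a few bystanders, conditional cooperators defect on anyone they have heard is bad, and strategies spread by payoff-based imitation. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

N, ROUNDS, GENERATIONS = 60, 200, 40
BENEFIT, COST, GOSSIP_FANOUT = 3.0, 1.0, 5
STRATEGIES = ["defector", "cooperator", "discriminator"]

def run_generation(strategies, gossipers):
    payoff = np.zeros(N)
    # reputation[i, j]: i's opinion of j (+1 good, -1 bad, 0 unknown)
    reputation = np.zeros((N, N))
    for _ in range(ROUNDS):
        donor, recipient = rng.choice(N, size=2, replace=False)
        s = strategies[donor]
        if s == "cooperator":
            cooperate = True
        elif s == "defector":
            cooperate = False
        else:  # discriminator: defect only on those known to be bad
            cooperate = reputation[donor, recipient] >= 0
        if cooperate:
            payoff[donor] -= COST
            payoff[recipient] += BENEFIT
        # The recipient observes the donor's action directly...
        reputation[recipient, donor] = 1 if cooperate else -1
        # ...and, if a gossiper, disseminates it to a few bystanders.
        if gossipers[recipient]:
            audience = rng.choice(N, size=GOSSIP_FANOUT, replace=False)
            reputation[audience, donor] = 1 if cooperate else -1
    return payoff

strategies = rng.choice(STRATEGIES, size=N)
gossipers = rng.random(N) < 0.5

for _ in range(GENERATIONS):
    payoff = run_generation(strategies, gossipers)
    # Evolution by imitation: each agent copies a random model's traits with
    # probability that grows with how much better the model did.
    for i in range(N):
        j = rng.integers(N)
        if rng.random() < 1 / (1 + np.exp(-(payoff[j] - payoff[i]))):
            strategies[i], gossipers[i] = strategies[j], gossipers[j]

print("final strategy counts:", dict(zip(*np.unique(strategies, return_counts=True))))
print("fraction of gossipers:", gossipers.mean())
```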

Wednesday, March 06, 2024

Deep learning models reveal sex differences in human functional brain organization that are replicable, generalizable, and behaviorally relevant

Ryali et al. do a massive analysis that argues strongly against the notion of a continuum in male-female brain organization, and underscore the crucial role of sex as a biological determinant in human brain organization and behavior.  I pass on the significance and abstract statements. Motivated readers can obtain a PDF of the article from me.  

Significance

Sex is an important biological factor that influences human behavior, impacting brain function and the manifestation of psychiatric and neurological disorders. However, previous research on how brain organization differs between males and females has been inconclusive. Leveraging recent advances in artificial intelligence and large multicohort fMRI (functional MRI) datasets, we identify highly replicable, generalizable, and behaviorally relevant sex differences in human functional brain organization localized to the default mode network, striatum, and limbic network. Our findings advance the understanding of sex-related differences in brain function and behavior. More generally, our approach provides AI–based tools for probing robust, generalizable, and interpretable neurobiological measures of sex differences in psychiatric and neurological disorders.
Abstract
Sex plays a crucial role in human brain development, aging, and the manifestation of psychiatric and neurological disorders. However, our understanding of sex differences in human functional brain organization and their behavioral consequences has been hindered by inconsistent findings and a lack of replication. Here, we address these challenges using a spatiotemporal deep neural network (stDNN) model to uncover latent functional brain dynamics that distinguish male and female brains. Our stDNN model accurately differentiated male and female brains, demonstrating consistently high cross-validation accuracy (>90%), replicability, and generalizability across multisession data from the same individuals and three independent cohorts (N ~ 1,500 young adults aged 20 to 35). Explainable AI (XAI) analysis revealed that brain features associated with the default mode network, striatum, and limbic network consistently exhibited significant sex differences (effect sizes > 1.5) across sessions and independent cohorts. Furthermore, XAI-derived brain features accurately predicted sex-specific cognitive profiles, a finding that was also independently replicated. Our results demonstrate that sex differences in functional brain dynamics are not only highly replicable and generalizable but also behaviorally relevant, challenging the notion of a continuum in male-female brain organization. Our findings underscore the crucial role of sex as a biological determinant in human brain organization, have significant implications for developing personalized sex-specific biomarkers in psychiatric and neurological disorders, and provide innovative AI-based computational tools for future research.
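For readers who want a feel for the general classify-then-explain workflow the abstract describes, here is a minimal sketch on synthetic data (my illustration using ordinary scikit-learn tools; it is not the authors' stDNN, their XAI pipeline, or their fMRI data, and the feature effect I plant is invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 "subjects", 50 "brain features", with group
# differences planted in features 0-4 only.
n_subjects, n_features = 1000, 50
sex = rng.integers(0, 2, size=n_subjects)      # 0 = one group, 1 = the other
X = rng.normal(size=(n_subjects, n_features))
X[:, :5] += 0.8 * sex[:, None]                 # planted group difference

clf = LogisticRegression(max_iter=1000)

# Step 1: cross-validated classification accuracy (the paper reports >90% with its stDNN).
acc = cross_val_score(clf, X, sex, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

# Step 2: explainability -- which features drive the classification?
clf.fit(X, sex)
importance = permutation_importance(clf, X, sex, n_repeats=20, random_state=0)
top = np.argsort(importance.importances_mean)[::-1][:5]
print("most discriminative features:", top)    # should recover features 0-4
```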

Monday, March 04, 2024

Brains creating stories of selves: the neural basis of autobiographical reasoning

We all create our experienced selves from autobiographical reasoning based on remembered events in stories from our lives. In the journal Social Cognitive and Affective Neuroscience, D'Argembeau et al. (open access) report an interesting fMRI study observing brain areas that are active during this process but are not recruited by simpler factual recall of the events.

...A few days before the scanning session, participants selected a set of memories that have been important in developing and sustaining their sense of self and identity (self-defining memories). During scanning, we instructed participants to approach each of their memories in two different ways: in some trials, they had to remember the concrete content of the event in order to mentally re-experience the situation in its original context (autobiographical remembering), whereas in other trials they were asked to reflect on the broader meaning and implications of their memory (autobiographical reasoning). Contrasting the neural activity associated with these two ways of approaching the same self-defining memories allowed us to identify the brain regions specifically involved in the autobiographical reasoning process.

The text of the article notes the functions of the brain areas mentioned in the article abstract (below) and has a nice graphic depiction of areas that were more active during autobiographical remembering than during autobiographical reasoning, and vice versa. Here is the abstract:

Personal identity critically depends on the creation of stories about the self and one’s life. The present study investigates the neural substrates of autobiographical reasoning, a process central to the construction of such narratives. During functional magnetic resonance imaging scanning, participants approached a set of personally significant memories in two different ways: in some trials, they remembered the concrete content of the events (autobiographical remembering), whereas in other trials they reflected on the broader meaning and implications of their memories (autobiographical reasoning). Relative to remembering, autobiographical reasoning recruited a left-lateralized network involved in conceptual processing [including the dorsal medial prefrontal cortex (MPFC), inferior frontal gyrus, middle temporal gyrus and angular gyrus]. The ventral MPFC—an area that may function to generate personal/affective meaning—was not consistently engaged during autobiographical reasoning across participants but, interestingly, the activity of this region was modulated by individual differences in interest and willingness to engage in self-reflection. These findings support the notion that autobiographical reasoning and the construction of personal narratives go beyond mere remembering in that they require deriving meaning and value from past experiences.


Friday, March 01, 2024

The Hidden History of Debt

I pass on this link from the latest Human Bridges newsletter, and would encourage readers to subscribe to and support the Observatory's Human Bridges project, which is part of the Independent Media Institute:

Recent scientific findings and research in the study of human origins and our biology, paleoanthropology, and primate research have reached a key threshold: we are increasingly able to trace the outlines and fill in the blanks of our evolutionary story that began 7 million years ago to the present, and understand the social and cultural processes that produced the world we live in now.

Wednesday, February 28, 2024

What Is a Society?: The Importance of Building an Interdisciplinary Perspective

I'm passing on the abstract I just received of a forthcoming article in Behavioral and Brain Sciences that I am starting to have a look through. Motivated readers can obtain a PDF of the article by emailing me.
Abstract: I submit the need to establish a comparative study of societies, namely groups beyond a simple, immediate family that have the potential to endure for generations, whose constituent individuals recognize one another as members, and that maintain control over access to a physical space. This definition, with refinements and ramifications I explore, serves for cross-disciplinary research since it applies not just to nations but to diverse hunter-gatherer and tribal groups with a pedigree that likely traces back to the societies of our common ancestor with the chimpanzees. It also applies to groups among other species for which comparison to humans can be instructive. Notably, it describes societies in terms of shared group identification rather than social interactions. An expansive treatment of the topic is overdue given that the concept of a society (even the use of such synonyms as primate "troop") has fallen out of favor among biologists, resulting in a semantic mess; while sociologists rarely consider societies beyond nations, and social psychologists predominantly focus on ethnicities and other component groups of societies. I examine the relevance of societies across realms of inquiry, discussing the ways member recognition is achieved; how societies compare to other organizational tiers; and their permeability, territoriality, relation to social networks and kinship, and impermanence. We have diverged from our ancestors in generating numerous affiliations within and between societies while straining the expectation of society memberships by assimilating diverse populations. Nevertheless, if, as I propose, societies were the first, and thereafter the primary, groups of prehistory, how we came to register society boundaries may be foundational to all human "groupiness." A discipline-spanning approach to societies should further our understanding of what keeps societies together and what tears them apart.

Monday, February 26, 2024

The "enjunkification" of our online lives

I want to pass on two articles I've pored over several times that describe the increasing "complexification" or "enjunkification" of our online lives. The first is "The Year Millennials Aged Out of the Internet" by Millennial writer Max Read. Here are some clips from the article.

Something is changing about the internet, and I am not the only person to have noticed. Everywhere I turned online this year, someone was mourning: Amazon is “making itself worse” (as New York magazine moaned); Google Search is a “bloated and overmonetized” tragedy (as The Atlantic lamented); “social media is doomed to die,” (as the tech news website The Verge proclaimed); even TikTok is becoming enjunkified (to bowdlerize an inventive coinage of the sci-fi writer Cory Doctorow, republished in Wired). But the main complaint I have heard was put best, and most bluntly, in The New Yorker: “The Internet Isn’t Fun Anymore.”

The heaviest users and most engaged American audience on the internet are no longer millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it’s not our internet anymore. It belongs to zoomers now...zoomers, and the adolescents in Generation Alpha nipping at their generational heels, still seem to be having plenty of fun online. Even if I find it all inscrutable and a bit irritating, the creative expression and exuberant sociality that made the internet so fun to me a decade ago are booming among 20-somethings on TikTok, Instagram, Discord, Twitch and even X.

...even if you’re jealous of zoomers and their Discord chats and TikTok memes, consider that the combined inevitability of enjunkification and cognitive decline means that their internet will die, too, and Generation Alpha will have its own era of inscrutable memes and alienating influencers. And then the zoomers can join millennials in feeling what boomers have felt for decades: annoyed and uncomfortable at the computer.

The second article I mention is Jon Caramanica's "Have We Reached the End of TikTok’s Infinite Scroll?" Again, a few clips:

The app once offered seemingly endless chances to be charmed by music, dances, personalities and products. But in only a few short years, its promise of kismet is evaporating...increasingly in recent months, scrolling the feed has come to resemble fumbling in the junk drawer: navigating a collection of abandoned desires, who-put-that-here fluff and things that take up awkward space in a way that blocks access to what you’re actually looking for.
This has happened before, of course — the moment when Twitter turned from good-faith salon to sinister outrage derby, or when Instagram, and its army of influencers, learned to homogenize joy and beauty...the malaise that has begun to suffuse TikTok feels systemic, market-driven and also potentially existential, suggesting the end of a flourishing era and the precipice of a wasteland period.
It’s an unfortunate result of the confluence of a few crucial factors. Most glaring is the arrival of TikTok’s shopping platform, which has turned even small creators into spokespeople and the for-you page of recommendations into an unruly bazaar...The effect of seeing all of these quasi-ads — QVC in your pocket — is soul-deadening...The speed and volume of the shift has been startling. Over time, Instagram became glutted with sponsored content and buy links, but its shopping interface never derailed the overall experience of the app. TikTok Shop has done that in just a few months, spoiling a tremendous amount of good will in the process.


 

 

Friday, February 23, 2024

Using caffeine to induce flow states

I pass on this link to an article in Neuroscience & Biobehavioral Reviews (open access) and show the Highlights and Abstract of the article below. One of the coauthors, Steven Kotler, who is executive director at the "Flow Research Collective," was mentioned in my previous 2019 MindBlog post "A Schism in Flow-land? Flow Genome Project vs. Flow Research Collective." It was the last in a series of critical posts that started in 2017. While I agree from my personal experience that caffeine (as well as other common stimulants) can induce more immersion in and focus on a task, I find the text, which has a bloated, unselective bibliography, to be mind-numbing gibble-gabble, just as were the writing efforts I was reviewing in 2017-2019. However, the authors do offer a recitation, which some will find useful, of "psychological and biological effects of caffeine that, conceptually, enhance flow" - whatever that means.

Highlights

-Caffeine promotes motivation (‘wanting’) and lowers effort aversion, thus facilitating flow.
-Caffeine boosts flow by increasing parasympathetic high frequency heart rate variability.
-Striatal endocannabinoid modulation by caffeine improves stress tolerance and flow.
-Chronic caffeine alters network activity, resulting in greater alertness and flow.
-Caffeine re-wires the dopamine reward system in ADHD for better attention and flow.

 Abstract

Flow is an intrinsically rewarding state characterised by positive affect and total task absorption. Because cognitive and physical performance are optimal in flow, chemical means to facilitate this state are appealing. Caffeine, a non-selective adenosine receptor antagonist, has been emphasized as a potential flow-inducer. Thus, we review the psychological and biological effects of caffeine that, conceptually, enhance flow. Caffeine may facilitate flow through various effects, including: i) upregulation of dopamine D1/D2 receptor affinity in reward-associated brain areas, leading to greater energetic arousal and ‘wanting’; ii) protection of dopaminergic neurons; iii) increases in norepinephrine release and alertness, which offset sleep-deprivation and hypoarousal; iv) heightening of parasympathetic high frequency heart rate variability, resulting in improved cortical stress appraisal, v) modification of striatal endocannabinoid-CB1 receptor-signalling, leading to enhanced stress tolerance; and vi) changes in brain network activity in favour of executive function and flow. We also discuss the application of caffeine to treat attention deficit hyperactivity disorder and caveats. We hope to inspire studies assessing the use of caffeine to induce flow.

Wednesday, February 21, 2024

AI makes our humanity matter more than ever.

I want to pass on this link to an NYTimes Opinion Guest essay co-authored by Aneesh Raman, a work force expert at LinkedIn. A few clips:

Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”

The knowledge economy that we have lived in for decades emerged out of a goods economy that we lived in for millenniums, fueled by agriculture and manufacturing. Today the knowledge economy is giving way to a relationship economy, in which people skills and social abilities are going to become even more core to success than ever before. That possibility is not just cause for new thinking when it comes to work force training. It is also cause for greater imagination when it comes to what is possible for us as humans not simply as individuals and organizations but as a species.

Monday, February 19, 2024

Comparing how generative AI and living organisms generate meaning suggests future direction for AI development

I want to pass on this open access opinion article in Trends in Cognitive Sciences by Karl Friston, Andy Clark, and other prominent figures who study generative models of sentient behavior in living organisms. (They suggest a future direction for AI development that is very similar to the vision described in the previous MindBlog post on a recent article by Venkatesh Rao.) Here are the highlights and abstract of the article.

Highlights

  • Generative artificial intelligence (AI) systems, such as large language models (LLMs), have achieved remarkable performance in various tasks such as text and image generation.
  • We discuss the foundations of generative AI systems by comparing them with our current understanding of living organisms, when seen as active inference systems.
  • Both generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways.
  • Living organisms and active inference agents learn their generative models by engaging in purposive interactions with the environment and by predicting these interactions. This provides them with a core understanding and a sense of mattering, upon which their subsequent knowledge is grounded.
  • Future generative AI systems might follow the same (biomimetic) approach – and learn the affordances implicit in embodied engagement with the world before – or instead of – being trained passively.

Abstract

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI.
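The contrast the highlights draw, learning by purposive interaction versus passive training, can be illustrated with a deliberately simple toy of my own (this is not active inference and is not from the article): two learners estimate a hidden threshold, one from observations it is handed passively, one by choosing each probe to test its current model where it is most uncertain.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy environment: a hidden threshold t in [0, 1]; querying a point x
# only reveals whether x is above the threshold. The "world model" to be
# learned is simply an estimate of t.
true_threshold = rng.random()

def observe(x):
    return x >= true_threshold

# Passive learner: receives a fixed batch of random observations it did not
# choose (loosely analogous to training on a static corpus).
def passive_estimate(n_queries):
    xs = rng.random(n_queries)
    ys = np.array([observe(x) for x in xs])
    above, below = xs[ys], xs[~ys]
    lo = below.max() if below.size else 0.0
    hi = above.min() if above.size else 1.0
    return (lo + hi) / 2, hi - lo        # estimate, remaining uncertainty

# Interactive learner: acts on the world, choosing each query to test its
# current model where it is most uncertain (binary search).
def active_estimate(n_queries):
    lo, hi = 0.0, 1.0
    for _ in range(n_queries):
        x = (lo + hi) / 2
        if observe(x):
            hi = x
        else:
            lo = x
    return (lo + hi) / 2, hi - lo

for n in [5, 10, 20]:
    _, passive_unc = passive_estimate(n)
    _, active_unc = active_estimate(n)
    print(f"{n:>2} queries: passive uncertainty ~{passive_unc:.3f}, active uncertainty ~{active_unc:.6f}")
```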

Saturday, February 17, 2024

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

I pass on the PDF of an article from the Gemini Team at Google. Here's the abstract, describing a "working memory" system vastly greater than our own that can hold 10 million tokens (a 'token' is roughly 0.75 words):

In this report, we present the latest model of the Gemini family, Gemini 1.5 Pro, a highly compute-efficient multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks. Studying the limits of Gemini 1.5 Pro’s long-context ability, we find continued improvement in next-token prediction and near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 2.1 (200k) and GPT-4 Turbo (128k). Finally, we highlight surprising new capabilities of large language models at the frontier; when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person learning from the same content.
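For a rough sense of scale, here is the arithmetic behind those context windows, using the ~0.75 words-per-token rule of thumb quoted above (my illustration; the token counts for the other models are the ones given in the abstract):

```python
# Rough word-equivalents of the context windows mentioned in the abstract.
WORDS_PER_TOKEN = 0.75  # common rule of thumb for English text

for name, tokens in [("GPT-4 Turbo", 128_000),
                     ("Claude 2.1", 200_000),
                     ("Gemini 1.5 Pro", 10_000_000)]:
    words = tokens * WORDS_PER_TOKEN
    print(f"{name:>15}: {tokens:>10,} tokens ~ {words:>11,.0f} words")
```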

Friday, February 16, 2024

An agent-based vision for scaling modern AI - Why current efforts are misguided.

I pass on my edited clips from Venkatesh Rao’s most recent newsletter - substantially shortening its length and inserting in brackets [ ] a few definitions of the techno-nerd-speak acronyms he uses. He suggests interesting analogies between the future evolution of AI and the evolutionary course taken by biological organisms:

…specific understandings of embodiment, boundary intelligence, temporality, and personhood, and their engineering implications, taken together, point to an agent-based vision of how to scale AI that I’ve started calling Massed Muddler Intelligence or MMI, that doesn’t look much like anything I’ve heard discussed.


…right now there’s only one option: monolithic scaling. Larger and larger models trained on larger and larger piles of compute and data…monolithic scaling is doomed. It is headed towards technical failure at a certain scale we are fast approaching


What sort of AI, in an engineering sense, should we attempt to build, in the same sense as one might ask, how should we attempt to build 2,500 foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick and mortar construction simply does not scale to those heights


…If we build AI datacenters that are 10x or 100x the scale of todays and train GPT-style models on them …problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable…current monolithic approaches to scaling AI are the equivalent of brick-and-mortar construction and fundamentally doomed…We need the equivalent of a reinforced concrete beam for AI…A distributed agent-based vision of modern AI is the scaling solution we need.

Scaling Precedents from Biology

There’s a precedent here in biology. Biological intelligence scales better with more agent-like organisms. For example: humans build organizations that are smarter than any individual, if you measure by complexity of outcomes, and also smarter than the scaling achieved by less agentic eusocial organisms…ants, bees, and sheep cannot build complex planet-scale civilizations. It takes much more sophisticated agent-like units to do that.

Agents are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (ie in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority…as in sociobiology, sustainably scalable AI agents will necessarily have the ability to govern and influence other agents (human or AI) in turn, through the same symmetric mechanisms that are used to govern and influence them…If you want to scale AI sustainably, governance and influence cannot be one way street from some privileged agents (humans) to other less privileged agents (AIs)….

If you want complexity and scaling, you cannot govern and influence a sophisticated agent without opening yourself up to being governed and influenced back. The reasoning here is similar to why liberal democracies generally scale human intelligence far better than autocracies. The MMI vision I’m going to outline could be considered “liberal democracy for mixed human-AI agent systems.” Rather than the autocratic idea of “alignment” associated with “AGI,” MMIs will call for something like the emergent mutualist harmony that characterizes functional liberal democracies. You don’t need an “alignment” theory. You need social contract theory.

The Road to Muddledom

Agents, and the distributed multiagent systems (MAS) that represent the corresponding scaling model, obviously aren’t a new idea in AI…MAS were often built as light architectural extensions of early object-oriented non-AI systems…none of this machinery works or is even particularly relevant for the problem of scaling modern AI, where the core source of computational intelligence is a large-X-model with fundamentally inscrutable input-output behavior. This is a new, oozy kind of intelligence we are building with for the first time…We’re in new regimes, dealing with fundamentally new building materials and aiming for new scales (orders of magnitude larger than anything imagined in the 1990s).

Muddling Doctrines

How do you build muddler agents? I don’t have a blueprint obviously, but here are four loose architectural doctrines, based on the four heterodoxies I noted at the start of this essay (see links there): embodiment, boundary intelligence, temporality, and personhood.

Embodiment matters: The physical form factor AI takes is highly relevant to its nature, behavior, and scaling potential.

Boundary intelligence matters: Past a threshold, intelligence is a function of the management of boundaries across which data flows, not the sophistication of the interiors where it is processed.

Temporality matters: The kind of time experienced by an AI matters for how it can scale sustainably.

Personhood matters: The attributes of an AI that enable humans and AIs to relate to each other as persons (I-you), rather than things (I-it), are necessary elements to being able to construct coherent scalably composable agents at all.


The first three principles require that AI computation involve real atoms, live in real time, and deal with the second law of thermodynamics

The fourth heterodoxy turns personhood …into a load-bearing architectural element in getting to scaled AI via muddler agents. You cannot have scaled AI without agency, and you cannot have a scalable sort of agency without personhood.

As we go up the scale of biological complexity, we get much more programmable and flexible forms of communication and coordination…we can start to distinguish individuals by their stable “personalities” (informationally, the identifiable signature of personhood). We go from army ants marching in death spirals to murmurations of starlings to formations of geese to wolf packs maneuvering tactically in pincer movements… to humans whose most sophisticated coordination patterns are so complex merely deciphering them stresses our intelligence to the limit.

Biology doesn’t scale to larger animals by making very large unicellular creatures. Instead it shifts to a multi-cellular strategy. Then it goes further: from simple reproduction of “mass produced” cells to specialized cells forming differentiated structures (tissues) via ontogeny (and later, in some mammals, through neoteny). Agents that scale well have to be complex and variegated agents internally, to achieve highly expressive and varied behaviors externally. But they must also present simplified facades — personas — to each other to enable the scaling and coordination.

Setting aside questions of philosophy (identity, consciousness),  personhood is a scaling strategy. Personhood is the behavioral equivalent of a cell. “Persons” are stable behavioral units that can compose in “multicellular” ways because they communicate differently than simpler agents with weak or non-existent personal boundaries, and low-agency organisms like plants and insects.

When we form and perform “personas,” we offer a harder interface around our squishy interior psyches that composes well with the interfaces of other persons for scaling purposes. A personhood performance is something like a composability API [application programming interface] for intelligence scaling.
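To make the composability-API image concrete, here is a tiny toy sketch of my own (not from Rao's newsletter): an agent whose interior state muddles along unpredictably but which exposes only a small, stable persona interface for other agents to compose against.

```python
from dataclasses import dataclass, field
import random

@dataclass
class MuddlerAgent:
    """A toy agent: a squishy, changing interior behind a hard, stable persona."""
    name: str
    _mood: float = field(default=0.5, repr=False)            # internal, never exposed
    _private_notes: list = field(default_factory=list, repr=False)

    # --- the "persona": the only surface other agents may compose against ---
    def request(self, task: str) -> str:
        """Accept or decline a task; the reply is predictable in form, not content."""
        self._muddle()                                        # the interior keeps drifting
        accepted = self._mood > 0.3
        self._private_notes.append((task, accepted))
        return f"{self.name}: {'accepted' if accepted else 'declined'} '{task}'"

    def status(self) -> str:
        """A coarse, stable summary instead of raw internal state."""
        return f"{self.name}: {'available' if self._mood > 0.3 else 'busy'}"

    # --- the squishy interior: inaccessible to other agents ---
    def _muddle(self):
        self._mood = min(1.0, max(0.0, self._mood + random.uniform(-0.2, 0.2)))

# Other agents coordinate only through the persona methods, so many such
# agents can be composed without anyone modeling each other's interiors.
a, b = MuddlerAgent("A"), MuddlerAgent("B")
print(a.status())
print(b.request("summarize the meeting"))
```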

Beyond Training Determinism

…Right now AIs experience most of their “time” during training, and then effectively enter a kind of stasis…They require versioned “updates” to get caught up again…GPT4 can’t simply grow or evolve its way to GPT5 by living life and learning from it. It needs to go through the human-assisted birth/death (or regeneration perhaps) singularity of a whole new training effort. And it’s not obvious how to automate this bottleneck in either a Darwinian or Lamarckian way.

…For all their power, modern AIs are still not able to live in real time and keep up with reality without human assistance outside of extremely controlled and stable environments…As far as temporality is concerned, we are in a “training determinism” regime that is very un-agentic and corresponds to genetic determinism in biology. What makes agents agents is that they live in real time, in a feedback loop with external reality unfolding at its actual pace of evolution.

Muddling Through vs. Godding Through

Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely. “Complex” here means things humans typically do in larger groups, like designing and implementing complex governance policies or undertaking complex engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding. This often coincides with the threshold where reality gets too big to hold in one human head.

The root method attempts to fight limitations with brute, monolithic force. It aims to absorb all the relevant information regarding the circumstances a priori (analogous to training determinism), and discover the globally optimal solution through “rational” and “comprehensive” thinking. If the branch method is “muddling through,” we might say that the root, or rational-comprehensive approach, is an attempt to “god through.”…Lindblom’s thesis is basically that muddling through eats godding through for lunch.

To put it much more bluntly: Godding through doesn’t work at all beyond small scales and it’s not because the brains are too small. Reasoning backwards from complex goals in the context of an existing complex system evolving in real time doesn’t work. You have to discover forwards (not reason forwards) by muddling.

…in thinking about humans, it is obvious that Lindblom was right…Even where godding through apparently prevails through brute force up to some scale, the costs are very high, and often those who pay the costs don’t survive to complain…Fear of Big Blundering Gods is the essential worry of traditional AI safety theology, but as I’ve been arguing since 2012 (see Hacking the Non-Disposable Planet), this is not an issue because these BBGs will collapse under their own weight long before they get big enough for such collapses to be exceptionally, existentially dangerous.

This worry is similar to the worry that a 2,500 foot brick-and-mortar building might collapse and kill everybody in the city…It’s not a problem because you can’t build a brick-and-mortar building to that height. You need reinforced concrete. And that gets you into entirely different sorts of safety concerns.

Protocols for Massed Muddling

How do you go from individual agents (AI or human) muddling through to masses of them muddling through together? What are the protocols of massed muddling? These are also the protocols of AI scaling towards MMIs (Massed Muddler Intelligences)

When you put a lot of them together using a mix of hard coordination protocols (including virtual-economic ones) and softer cultural protocols, you get a massed muddler intelligence, or MMI. Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.

I suspect at the scales we are talking about, we will have something that looks more like a market economy than like the internal command-economy structure of the human body. Both feature a lot of hierarchical structure and differentiation, but the former is much less planned, and more a result of emergent patterns of agglomeration around environmental circumstances (think how the large metros that anchor the global economy form around the natural geography of the planet, rather than how major organ systems of the human body are put together).

While I suspect MMIs will partly emerge via choreographed ontogenic roadmaps from a clump of “stem cells” (is that perhaps what LxMs [large language models] are??), the way market economies emerge from nationalist industrial policies, overall the emergent intelligences will be masses of muddling rather than coherent artificial leviathans. Scaling “plans” will help launch, but not determine the nature of MMIs or their internal operating protocols at scale. Just like tax breaks and tariffs might help launch a market economy but not determine the sophistication of the economy that emerges or the transactional patterns that coordinate it. This also answers the regulation question: Regulating modern AI MMIs will look like economic regulation, not technology regulation.

How the agentic nature of the individual muddler agent building block is preserved and protected is the critical piece of the puzzle, just as individual economic rights (such as property rights, contracting regimes) are the critical piece in the design of “free” markets.

Muddling produces a shell of behavioral uncertainty around what a muddler agent will do, and how it will react to new information, that creates an outward pressure on the compressive forces created by the dense aggregation required for scaling. This is something like the electron degeneracy pressure that resists the collapse of stars under their own gravity. Or how the individualist streak in even the most dedicated communist human resists the collapse of even the most powerful cults into pure hive minds. Or how exit/voice dynamics resist the compression forces of unaccountable organizational management.

…the fundamental intentional tendency of individual agents, on which all other tendencies, autonomous or not, socially influenceable or not, rest…[is] body envelope integrity.

…This is a familiar concern for biological organisms. Defending against your body being violently penetrated is probably the foundation of our entire personality. It’s the foundation of our personal safety priorities — don’t get stabbed, shot, bitten, clawed or raped. All politics and economics is an extension of envelope integrity preservation instincts. For example, strictures against theft (especially identity theft) are about protecting the body envelope integrity of your economic body. Habeas corpus is the bedrock of modern political systems for a reason. Your physical body is your political body…if you don’t have body envelope integrity you have nothing.

This is easiest to appreciate in one very visceral and vivid form of MMIs: distributed robot systems. Robots, like biological organisms, have an actual physical body envelope (though unlike biological organisms they can have high-bandwidth near-field telepathy). They must preserve the integrity of that envelope as a first order of business … But robot MMIs are not the only possible form factor. We can think of purely software agents that live in an AI datacenter, and maintain boundaries and personhood envelopes that are primarily informational rather than physical. The same fundamental drive applies. The integrity of the (virtual) body envelope is the first concern.

This is why embodiment is an axiomatic concern. The nature of the integrity problem depends on the nature of the embodiment. A robot can run away from danger. A software muddler agent in a shared memory space within a large datacenter must rely on memory protection, encryption, and other non-spatial affordances of computing environments.
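
As a purely illustrative aside (my own, not part of the essay being quoted), the idea of an informational body envelope can be made concrete with a small Python sketch: a software agent whose interior state can only be reached through a message check at its boundary. The shared key and message format here are hypothetical stand-ins for whatever cryptographic machinery a real datacenter agent would use.

    import hashlib
    import hmac
    import json

    SHARED_KEY = b"hypothetical-boundary-key"  # stand-in for a real key-management scheme

    class BoundedAgent:
        """Toy software agent whose 'body envelope' is an authentication check."""

        def __init__(self):
            self._memory = {}  # interior state, reachable only through handle()

        def handle(self, message: dict):
            body = json.dumps(message["body"], sort_keys=True).encode()
            tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(tag, message.get("tag", "")):
                return None  # the envelope holds: unauthenticated input never touches state
            self._memory.update(message["body"])
            return {"ok": True}

    def send(body: dict) -> dict:
        """Wrap a payload with the tag the agent's boundary expects."""
        raw = json.dumps(body, sort_keys=True).encode()
        return {"body": body, "tag": hmac.new(SHARED_KEY, raw, hashlib.sha256).hexdigest()}

The point of the sketch is only that, for a purely informational agent, "running away from danger" is replaced by refusing to let unverified input cross the boundary at all.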

Personhood is the emergent result of successfully solving the body-envelope-integrity problem over time, allowing an agent to present a coherent and hard mask model to other agents even in unpredictable environments. This is not about putting a smiley-faced RLHF [Reinforcement Learning from Human Feedback] mask on a shoggoth interior to superficially “align” it. This is about offering a predictable API for other agents to reliably interface with, so scaled structures in time and social space don’t collapse. [They have] hardness - the property or quality that allows agents with soft and squishy interiors to offer hard and unyielding interfaces to other agents, allowing for coordination at scale.
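
Again as my own illustration rather than anything from the essay, "hardness" might look like this in a minimal Python sketch (all names invented): the agent's interior drifts stochastically, but every caller sees the same fixed, validated interface with bounded outputs.

    import random

    class MuddlerAgent:
        """Soft, stochastic interior behind a hard, unyielding interface."""

        def __init__(self, seed=None):
            self._rng = random.Random(seed)
            self._mood = 0.0  # squishy interior, free to drift

        def _muddle(self):
            # interior updates are noisy and unplanned, invisible to callers
            self._mood = max(-0.5, min(0.5, self._mood + self._rng.uniform(-0.1, 0.1)))

        def request(self, task: str) -> dict:
            # hard interface: fixed signature, validated input, bounded output
            if not isinstance(task, str) or not task:
                raise ValueError("task must be a non-empty string")
            self._muddle()
            return {"task": task, "confidence": round(0.5 + self._mood, 2)}

Other agents coordinate against request()'s stable contract, not against the drifting interior, which is what lets scaled structures of such agents persist when any one interior changes.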

…We can go back to the analogy to reinforced concrete. MMIs are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and concrete in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.

MMIs will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.

And then we can have a whole new culture war about whether that’s a good thing.

Wednesday, February 14, 2024

How long has humanity been at war with itself?

I would like to point MindBlog readers to an article by Deborah Barsky with the title of this post. The following clip provides relevant links to the Human Bridges project of the Independent Media Institute. 

Deborah Barsky is a writing fellow for the Human Bridges project of the Independent Media Institute, a researcher at the Catalan Institute of Human Paleoecology and Social Evolution, and an associate professor at the Rovira i Virgili University in Tarragona, Spain, with the Open University of Catalonia (UOC). She is the author of Human Prehistory: Exploring the Past to Understand the Future (Cambridge University Press, 2022).

Monday, February 12, 2024

The Art of Doing Nothing

Deep into the Juniper/Cedar pollen allergy season in Austin TX, I'm frustrated that I have so little energy to do things. It's as if my batteries cannot muster more than a 10% charge. I try to tell myself that it's OK to 'just be', to do nothing, and have not been very successful at this. So, I enjoyed stumbling upon a recent Guardian article, "The art of doing nothing: have the Dutch found the answer to burnout culture?," whose URL I pass on to MindBlog readers. It describes the concept of 'niksen,' or the Dutch art of doing nothing. It has ameliorated my concern over how little I have been getting done, and references a 2019 NYTimes article, "The Case for Doing Nothing," that went viral when it was first published. Letting go of always finding problems that need to be solved lets one face a question posed by the meditation guru Loch Kelly: "What is there when there are no problems to be solved?" Variants of this question have been addressed in a thread of numerous MindBlog posts that I have now largely drawn to a close.

Friday, February 09, 2024

Bodily maps of musical sensations across cultures

Interesting work from Putkinen et al. (open source):  

Significance

Music is inherently linked with the body. Here, we investigated how music's emotional and structural aspects influence bodily sensations and whether these sensations are consistent across cultures. Bodily sensations evoked by music varied depending on its emotional qualities, and the music-induced bodily sensations and emotions were consistent across the tested cultures. Musical features also influenced the emotional experiences and bodily sensations consistently across cultures. These findings show that bodily feelings contribute to the elicitation and differentiation of music-induced emotions and suggest similar embodiment of music-induced emotions in geographically distant cultures. Music-induced emotions may transcend cultural boundaries due to cross-culturally shared links between musical features, bodily sensations, and emotions.
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.

Wednesday, February 07, 2024

Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment

I pass on to MindBlog readers the abstract of a recent Behavioral and Brain Sciences article by Sijilmassi et al. titled "‘Our Roots Run Deep’: Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment." Motivated readers can obtain a PDF of the article from me.

One of the most remarkable manifestations of social cohesion in large-scale entities is the belief in a shared, distinct and ancestral past. Human communities around the world take pride in their ancestral roots, commemorate their long history of shared experiences, and celebrate the distinctiveness of their historical trajectory. Why do humans put so much effort into celebrating a long-gone past? Integrating insights from evolutionary psychology, social psychology, evolutionary anthropology, political science, cultural history and political economy, we show that the cultural success of historical myths is driven by a specific adaptive challenge for humans: the need to recruit coalitional support to engage in large scale collective action and prevail in conflicts. By showcasing a long history of cooperation and shared experiences, these myths serve as super-stimuli, activating specific features of social cognition and drawing attention to cues of fitness interdependence. In this account, historical myths can spread within a population without requiring group-level selection, as long as individuals have a vested interest in their propagation and strong psychological motivations to create them. Finally, this framework explains, not only the design-features of historical myths, but also important patterns in their cross-cultural prevalence, inter-individual distribution, and particular content.

Monday, February 05, 2024

Functional human brain tissue produced by layering different neuronal types with 3D bioprinting

A very important advance by Su-Chun Zhang and collaborators at the University of Wisconsin that moves studies of nerve cells connecting in culture dishes from two to three dimensions:  

Highlights

  • Functional human neural tissues assembled by 3D bioprinting
  • Neural circuits formed between defined neural subtypes
  • Functional connections established between cortical-striatal tissues
  • Printed tissues for modeling neural network impairment

Summary

Probing how human neural networks operate is hindered by the lack of reliable human neural tissues amenable to the dynamic functional assessment of neural circuits. We developed a 3D bioprinting platform to assemble tissues with defined human neural cell types in a desired dimension using a commercial bioprinter. The printed neuronal progenitors differentiate into neurons and form functional neural circuits within and between tissue layers with specificity within weeks, evidenced by the cortical-to-striatal projection, spontaneous synaptic currents, and synaptic response to neuronal excitation. Printed astrocyte progenitors develop into mature astrocytes with elaborated processes and form functional neuron-astrocyte networks, indicated by calcium flux and glutamate uptake in response to neuronal excitation under physiological and pathological conditions. These designed human neural tissues will likely be useful for understanding the wiring of human neural networks, modeling pathological processes, and serving as platforms for drug testing.