Showing posts with label culture/politics.

Monday, February 26, 2024

The "enjunkification" of our online lives

I want to pass on two articles I've pored over several times that describe the increasing "complexification" or "enjunkification" of our online lives. The first is "The Year Millennials Aged Out of the Internet" by millennial writer Max Read. Here are some clips from the article.

Something is changing about the internet, and I am not the only person to have noticed. Everywhere I turned online this year, someone was mourning: Amazon is “making itself worse” (as New York magazine moaned); Google Search is a “bloated and overmonetized” tragedy (as The Atlantic lamented); “social media is doomed to die,” (as the tech news website The Verge proclaimed); even TikTok is becoming enjunkified (to bowdlerize an inventive coinage of the sci-fi writer Cory Doctorow, republished in Wired). But the main complaint I have heard was put best, and most bluntly, in The New Yorker: “The Internet Isn’t Fun Anymore.”

The heaviest users and most engaged American audience on the internet are no longer millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it’s not our internet anymore. It belongs to zoomers now...zoomers, and the adolescents in Generation Alpha nipping at their generational heels, still seem to be having plenty of fun online. Even if I find it all inscrutable and a bit irritating, the creative expression and exuberant sociality that made the internet so fun to me a decade ago are booming among 20-somethings on TikTok, Instagram, Discord, Twitch and even X.

...even if you’re jealous of zoomers and their Discord chats and TikTok memes, consider that the combined inevitability of enjunkification and cognitive decline means that their internet will die, too, and Generation Alpha will have its own era of inscrutable memes and alienating influencers. And then the zoomers can join millennials in feeling what boomers have felt for decades: annoyed and uncomfortable at the computer.

The second article I mention is Jon Caramanica's "Have We Reached the End of TikTok’s Infinite Scroll?" Again, a few clips:

The app once offered seemingly endless chances to be charmed by music, dances, personalities and products. But in only a few short years, its promise of kismet is evaporating...increasingly in recent months, scrolling the feed has come to resemble fumbling in the junk drawer: navigating a collection of abandoned desires, who-put-that-here fluff and things that take up awkward space in a way that blocks access to what you’re actually looking for.
This has happened before, of course — the moment when Twitter turned from good-faith salon to sinister outrage derby, or when Instagram, and its army of influencers, learned to homogenize joy and beauty...the malaise that has begun to suffuse TikTok feels systemic, market-driven and also potentially existential, suggesting the end of a flourishing era and the precipice of a wasteland period.
It’s an unfortunate result of the confluence of a few crucial factors. Most glaring is the arrival of TikTok’s shopping platform, which has turned even small creators into spokespeople and the for-you page of recommendations into an unruly bazaar...The effect of seeing all of these quasi-ads — QVC in your pocket — is soul-deadening...The speed and volume of the shift has been startling. Over time, Instagram became glutted with sponsored content and buy links, but its shopping interface never derailed the overall experience of the app. TikTok Shop has done that in just a few months, spoiling a tremendous amount of good will in the process.



Wednesday, February 21, 2024

AI makes our humanity matter more than ever.

I want to pass on this link to an NYTimes Opinion guest essay by Aneesh Raman, a workforce expert at LinkedIn. A few clips:

Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”

The knowledge economy that we have lived in for decades emerged out of a goods economy that we lived in for millenniums, fueled by agriculture and manufacturing. Today the knowledge economy is giving way to a relationship economy, in which people skills and social abilities are going to become even more core to success than ever before. That possibility is not just cause for new thinking when it comes to work force training. It is also cause for greater imagination when it comes to what is possible for us as humans not simply as individuals and organizations but as a species.

Friday, February 16, 2024

An agent-based vision for scaling modern AI - Why current efforts are misguided.

I pass on my edited clips from Venkatesh Rao’s most recent newsletter, substantially shortening it and inserting in brackets [ ] a few definitions of the techno-nerd-speak acronyms he uses. He suggests interesting analogies between the future evolution of AI and the evolutionary course taken by biological organisms:

…specific understandings of embodiment, boundary intelligence, temporality, and personhood, and their engineering implications, taken together, point to an agent-based vision of how to scale AI that I’ve started calling Massed Muddler Intelligence or MMI, that doesn’t look much like anything I’ve heard discussed.

…right now there’s only one option: monolithic scaling. Larger and larger models trained on larger and larger piles of compute and data…monolithic scaling is doomed. It is headed towards technical failure at a certain scale we are fast approaching.

What sort of AI, in an engineering sense, should we attempt to build, in the same sense as one might ask: how should we attempt to build 2,500-foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick-and-mortar construction simply does not scale to those heights.

…If we build AI datacenters that are 10x or 100x the scale of todays and train GPT-style models on them …problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable…current monolithic approaches to scaling AI are the equivalent of brick-and-mortar construction and fundamentally doomed…We need the equivalent of a reinforced concrete beam for AI…A distributed agent-based vision of modern AI is the scaling solution we need.

Scaling Precedents from Biology

There’s a precedent here in biology. Biological intelligence scales better with more agent-like organisms. For example: humans build organizations that are smarter than any individual, if you measure by complexity of outcomes, and also smarter than the scaling achieved by less agentic eusocial organisms…ants, bees, and sheep cannot build complex planet-scale civilizations. It takes much more sophisticated agent-like units to do that.

Agents are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (ie in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority…as in sociobiology, sustainably scalable AI agents will necessarily have the ability to govern and influence other agents (human or AI) in turn, through the same symmetric mechanisms that are used to govern and influence them…If you want to scale AI sustainably, governance and influence cannot be one way street from some privileged agents (humans) to other less privileged agents (AIs)….

If you want complexity and scaling, you cannot govern and influence a sophisticated agent without opening yourself up to being governed and influenced back. The reasoning here is similar to why liberal democracies generally scale human intelligence far better than autocracies. The MMI vision I’m going to outline could be considered “liberal democracy for mixed human-AI agent systems.” Rather than the autocratic idea of “alignment” associated with “AGI,” MMIs will call for something like the emergent mutualist harmony that characterizes functional liberal democracies. You don’t need an “alignment” theory. You need social contract theory.

The Road to Muddledom

Agents, and the distributed multiagent systems (MAS) that represent the corresponding scaling model, obviously aren’t a new idea in AI…MAS were often built as light architectural extensions of early object-oriented non-AI systems…none of this machinery works or is even particularly relevant for the problem of scaling modern AI, where the core source of computational intelligence is a large-X-model with fundamentally inscrutable input-output behavior. This is a new, oozy kind of intelligence we are building with for the first time…We’re in new regimes, dealing with fundamentally new building materials and aiming for new scales (orders of magnitude larger than anything imagined in the 1990s).

Muddling Doctrines

How do you build muddler agents? I don’t have a blueprint obviously, but here are four loose architectural doctrines, based on the four heterodoxies I noted at the start of this essay (see links there): embodiment, boundary intelligence, temporality, and personhood.

Embodiment matters: The physical form factor AI takes is highly relevant to its nature, behavior, and scaling potential.

Boundary intelligence matters. Past a threshold, intelligence is a function of the management of boundaries across which data flows, not the sophistication of the interiors where it is processed.

Temporality matters: The kind of time experienced by an AI matters for how it can scale sustainably.

Personhood matters: The attributes of an AI that enable humans and AIs to relate to each other as persons (I-you), rather than things (I-it), are necessary elements to being able to construct coherent scalably composable agents at all.

The first three principles require that AI computation involve real atoms, live in real time, and deal with the second law of thermodynamics.

The fourth heterodoxy turns personhood …into a load-bearing architectural element in getting to scaled AI via muddler agents. You cannot have scaled AI without agency, and you cannot have a scalable sort of agency without personhood.

As we go up the scale of biological complexity, we get much more programmable and flexible forms of communication and coordination…we can start to distinguish individuals by their stable “personalities” (informationally, the identifiable signature of personhood). We go from army ants marching in death spirals to murmurations of starlings to formations of geese to wolf packs maneuvering tactically in pincer movements…to humans whose most sophisticated coordination patterns are so complex merely deciphering them stresses our intelligence to the limit.

Biology doesn’t scale to larger animals by making very large unicellular creatures. Instead it shifts to a multi-cellular strategy. Then it goes further: from simple reproduction of “mass produced” cells to specialized cells forming differentiated structures (tissues) via ontogeny (and later, in some mammals, through neoteny). Agents that scale well have to be complex and variegated agents internally, to achieve highly expressive and varied behaviors externally. But they must also present simplified facades — personas — to each other to enable the scaling and coordination.

Setting aside questions of philosophy (identity, consciousness),  personhood is a scaling strategy. Personhood is the behavioral equivalent of a cell. “Persons” are stable behavioral units that can compose in “multicellular” ways because they communicate differently than simpler agents with weak or non-existent personal boundaries, and low-agency organisms like plants and insects.

When we form and perform “personas,” we offer a harder interface around our squishy interior psyches that composes well with the interfaces of other persons for scaling purposes. A personhood performance is something like a composability API [application programming interface] for intelligence scaling.

Beyond Training Determinism

…Right now AIs experience most of their “time” during training, and then effectively enter a kind of stasis…They require versioned “updates” to get caught up again…GPT-4 can’t simply grow or evolve its way to GPT-5 by living life and learning from it. It needs to go through the human-assisted birth/death (or regeneration perhaps) singularity of a whole new training effort. And it’s not obvious how to automate this bottleneck in either a Darwinian or Lamarckian way.

…For all their power, modern AIs are still not able to live in real time and keep up with reality without human assistance outside of extremely controlled and stable environments…As far as temporality is concerned, we are in a “training determinism” regime that is very un-agentic and corresponds to genetic determinism in biology. What makes agents agents is that they live in real time, in a feedback loop with external reality unfolding at its actual pace of evolution.

Muddling Through vs. Godding Through

Lindblom’s paper [Charles Lindblom’s 1959 “The Science of ‘Muddling Through’”] identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated, messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely. “Complex” here means things humans typically do in larger groups, like designing and implementing complex governance policies or undertaking complex engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding. This often coincides with the threshold where reality gets too big to hold in one human head.

The root method attempts to fight limitations with brute, monolithic force. It aims to absorb all the relevant information regarding the circumstances a priori (analogous to training determinism), and discover the globally optimal solution through “rational” and “comprehensive” thinking. If the branch method is “muddling through,” we might say that the root, or rational-comprehensive approach, is an attempt to “god through.”…Lindblom’s thesis is basically that muddling through eats godding through for lunch.

To put it much more bluntly: Godding through doesn’t work at all beyond small scales, and it’s not because the brains are too small. Reasoning backwards from complex goals in the context of an existing complex system evolving in real time doesn’t work. You have to discover forwards (not reason forwards) by muddling. Thinking about humans, it is obvious that Lindblom was right…Even where godding through apparently prevails through brute force up to some scale, the costs are very high, and often those who pay the costs don’t survive to complain…Fear of Big Blundering Gods is the essential worry of traditional AI safety theology, but as I’ve been arguing since 2012 (see Hacking the Non-Disposable Planet), this is not an issue because these BBGs will collapse under their own weight long before they get big enough for such collapses to be exceptionally, existentially dangerous.

This worry is similar to the worry that a 2,500 foot brick-and-mortar building might collapse and kill everybody in the city…It’s not a problem because you can’t build a brick-and-mortar building to that height. You need reinforced concrete. And that gets you into entirely different sorts of safety concerns.

Protocols for Massed Muddling

How do you go from individual agents (AI or human) muddling through to masses of them muddling through together? What are the protocols of massed muddling? These are also the protocols of AI scaling towards MMIs (Massed Muddler Intelligences).

When you put a lot of them together using a mix of hard coordination protocols (including virtual-economic ones) and softer cultural protocols, you get a massed muddler intelligence, or MMI. Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.

I suspect at the scales we are talking about, we will have something that looks more like a market economy than like the internal command-economy structure of the human body. Both feature a lot of hierarchical structure and differentiation, but the former is much less planned, and more a result of emergent patterns of agglomeration around environmental circumstances (think how the large metros that anchor the global economy form around the natural geography of the planet, rather than how major organ systems of the human body are put together).

While I suspect MMIs will partly emerge via choreographed ontogenic roadmaps from a clump of “stem cells” (is that perhaps what LxMs [large language models] are??), the way market economies emerge from nationalist industrial policies, overall the emergent intelligences will be masses of muddling rather than coherent artificial leviathans. Scaling “plans” will help launch, but not determine the nature of MMIs or their internal operating protocols at scale. Just like tax breaks and tariffs might help launch a market economy but not determine the sophistication of the economy that emerges or the transactional patterns that coordinate it. This also answers the regulation question: Regulating modern AI MMIs will look like economic regulation, not technology regulation.

How the agentic nature of the individual muddler agent building block is preserved and protected is the critical piece of the puzzle, just as individual economic rights (such as property rights, contracting regimes) are the critical piece in the design of “free” markets.

Muddling produces a shell of behavioral uncertainty around what a muddler agent will do, and how it will react to new information, that creates an outward pressure on the compressive forces created by the dense aggregation required for scaling. This is something like the electron degeneracy pressure that resists the collapse of stars under their own gravity. Or how the individualist streak in even the most dedicated communist human resists the collapse of even the most powerful cults into pure hive minds. Or how exit/voice dynamics resist the compression forces of unaccountable organizational management.

…the fundamental intentional tendency of individual agents, on which all other tendencies, autonomous or not, socially influenceable or not, rest…[is] body envelope integrity.

…This is a familiar concern for biological organisms. Defending against your body being violently penetrated is probably the foundation of our entire personality. It’s the foundation of our personal safety priorities — don’t get stabbed, shot, bitten, clawed or raped. All politics and economics is an extension of envelope integrity preservation instincts. For example, strictures against theft (especially identity theft) are about protecting the body envelope integrity of your economic body. Habeas corpus is the bedrock of modern political systems for a reason. Your physical body is your political body…if you don’t have body envelope integrity you have nothing.

This is easiest to appreciate in one very visceral and vivid form of MMIs: distributed robot systems. Robots, like biological organisms, have an actual physical body envelope (though unlike biological organisms they can have high-bandwidth near-field telepathy). They must preserve the integrity of that envelope as a first order of business … But robot MMIs are not the only possible form factor. We can think of purely software agents that live in an AI datacenter, and maintain boundaries and personhood envelopes that are primarily informational rather than physical. The same fundamental drive applies. The integrity of the (virtual) body envelope is the first concern.

This is why embodiment is an axiomatic concern. The nature of the integrity problem depends on the nature of the embodiment. A robot can run away from danger. A software muddler agent in a shared memory space within a large datacenter must rely on memory protection, encryption, and other non-spatial affordances of computing environments.

Personhood is the emergent result of successfully solving the body-envelope-integrity problem over time, allowing an agent to present a coherent and hard mask model to other agents even in unpredictable environments. This is not about putting a smiley-faced RLHF [Reinforcement Learning from Human Feedback] mask on a shoggoth interior to superficially “align” it. This is about offering a predictable API for other agents to reliably interface with, so scaled structures in time and social space don’t collapse. [They have] hardness - the property or quality that allows agents with soft and squishy interiors to offer hard and unyielding interfaces to other agents, allowing for coordination at scale.

…We can go back to the analogy to reinforced concrete. MMIs are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and cement in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.

MMIs will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.

And then we can have a whole new culture war about whether that’s a good thing.

Wednesday, February 14, 2024

How long has humanity been at war with itself?

I would like to point MindBlog readers to an article by Deborah Barsky with the title of this post. The following clip provides relevant links to the Human Bridges project of the Independent Media Institute. 

Deborah Barsky is a writing fellow for the Human Bridges project of the Independent Media Institute, a researcher at the Catalan Institute of Human Paleoecology and Social Evolution, and an associate professor at the Rovira i Virgili University in Tarragona, Spain, with the Open University of Catalonia (UOC). She is the author of Human Prehistory: Exploring the Past to Understand the Future (Cambridge University Press, 2022).

Wednesday, February 07, 2024

Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment

I pass on to MindBlog readers the abstract of a recent Behavioral and Brain Sciences article by Sijilmassi et al. titled "‘Our Roots Run Deep’: Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment." Motivated readers can obtain a PDF of the article from me.

One of the most remarkable manifestations of social cohesion in large-scale entities is the belief in a shared, distinct and ancestral past. Human communities around the world take pride in their ancestral roots, commemorate their long history of shared experiences, and celebrate the distinctiveness of their historical trajectory. Why do humans put so much effort into celebrating a long-gone past? Integrating insights from evolutionary psychology, social psychology, evolutionary anthropology, political science, cultural history and political economy, we show that the cultural success of historical myths is driven by a specific adaptive challenge for humans: the need to recruit coalitional support to engage in large scale collective action and prevail in conflicts. By showcasing a long history of cooperation and shared experiences, these myths serve as super-stimuli, activating specific features of social cognition and drawing attention to cues of fitness interdependence. In this account, historical myths can spread within a population without requiring group-level selection, as long as individuals have a vested interest in their propagation and strong psychological motivations to create them. Finally, this framework explains, not only the design-features of historical myths, but also important patterns in their cross-cultural prevalence, inter-individual distribution, and particular content.

Friday, February 02, 2024

Towards a Metaphysics of Worlds

I have a splitting headache from having just watched a 27-minute rapid-fire YouTube lecture by Venkatesh Rao, given last November at the Autonomous Worlds Assembly in Istanbul (part of DevConnect, a major Ethereum ecosystem event). His latest newsletter, “Towards a Metaphysics of Worlds,” adds some notes and context, and gives a link to its slides. As Rao notes:

“This may seem like a glimpse into a very obscure and nerdy subculture for many (most?) of you, but I think something very important and interesting is brewing in this scene and more people should know about it.”

I would suggest that you skip the YouTube lecture and cherry-pick your way through his slides. Some are very simple and quite striking, clearly presenting interesting ideas about the epistemology, ontology, and definitions of worlds. Here is Slide 11, where what Rao means by "Worlds" is made clearer:

Monday, January 29, 2024

Mind blog’s first 18 years - what next? A space for discussions among MindBlog readers?

This is post number 5,537 of Deric's MindBlog, which will soon be celebrating its 18th birthday. I started this blog on Feb. 6, 2006, in the middle of the golden age of blogging, with a post titled “Dangerous Ideas.” In the late 2000s the rise of social media like Facebook, Twitter, and Instagram shifted audiences toward shorter, more engaging posts, and after 2010 multimedia platforms like YouTube, Twitch, and TikTok became popular. Max Read laments the increasing effort required to deal with the infosphere as millennials have ‘aged out’ and members of Generation Z have become more eager early adopters of ChatGPT than their elders. The current digital landscape emphasizes content monetization, influencer marketing, and multi-platform presence, but there still remains a vital role for niche blogs such as this one, where readers can find specialized content beyond the mainstream social media noise.

I recently received an email from a MindBlog reader in Germany who lamented that MindBlog receives very few comments from readers. I have received numerous emails over the years from silent but loyal readers who express gratitude for the effort I put into the blog, but there have been only a few extended discussion threads, such as those on anti-aging compounds and life-optimization snake oil.

My German reader made the interesting suggestion that I consider initiating a platform for direct interactions and deeper engagement among MindBlog readers, perhaps a live video and text platform that might include both experts and educated laypeople outside the scientific community. This would be relatively easy for me to set up if sufficient interest is shown, so I invite readers interested in this prospect to email me at

Sunday, January 21, 2024

Titles and URLs for key MindBlog posts on selves

I pass on a chronological list of titles and URLs of MindBlog posts assembled in preparation for a video chat with a European MindBlog reader:

An "Apostle's Creed" for the humanistic scientific materialist?

Some rambling on "Selves" and “Purpose”

Self, purpose, and tribal mentality as Darwinian adaptations (or…Why why aren’t we all enlightened?)

MindBlog passes on a note: on the relief of not being yourself

Points on having a self and free will.

I am not my problem

The non-duality industry as a panacea for the anxieties of our times?

Enlightenment, Habituation, and Renewal - Or, Mindfulness as the opiate of the thinking classes?

A quick MindBlog riff on what a self is….

MindBlog paragraphs bloviating on the nature of the self ask Google Bard and Chat GPT 4 for help

A MindBlog paragraph on non-dual awareness massaged by Bard and ChatGPT-4

Constructing Self and World  

Anthropic Claude's version of my writing on the Mind - a condensation of my ideas  

A Materialist's Credo

How our genes support our illusory selves - the "Baldwin effect"




Monday, January 01, 2024

On shifting perspectives....

I pass on clips from a December 2023 piece in the Wall Street Journal by Carlo Rovelli, the author, most recently, of ‘White Holes: Inside the Horizon’:


Somnium

By Johannes Kepler (1634)

1 Perhaps the greatest conceptual earthquake in the history of civilization was the Copernican Revolution. Prior to Copernicus, there were two realms: the celestial and the terrestrial. Celestial things orbit, terrestrial ones fall. The former are eternal, the latter perishable. Copernicus proposed a different organization of reality, in which the sun is in a class of its own. In another class are the planets, with the Earth being merely one among many. The moon is in yet another class, all by itself. Everything revolves around the sun, but the moon revolves around the Earth. This mad subversion of conventional reason was taken seriously only after Galileo and Kepler convinced humankind that Copernicus was indeed right. “Somnium” (“The Dream”) is the story of an Icelandic boy—Kepler’s alter ego—his witch mother and a daemon. The daemon takes the mother and son up to the moon to survey the universe, showing explicitly that what they usually see from Earth is the perspective from a moving body. Sheer genius.


History

By Elsa Morante (1974)

2 This passionate and intelligent novel is a fresco of Italy during World War II. “La Storia,” its title in Italian, can be translated as “story” or “tale” as well as “history.” Elsa Morante plumbs the complexity of humankind and its troubles, examining the sufferings caused by war. She writes from the view of the everyday people who bear the burden of the horror. This allows her to avoid taking sides and to see the humanity in both. The subtitle of this masterpiece—“a scandal that has lasted for ten thousand years”— captures Morante’s judgment of war, inviting us to a perspective shift on all wars.

Collected Poems of Lenore Kandel

By Lenore Kandel (2012)

3 Lenore Kandel was a wonderful and underrated poet who was part of the Beat-hippie movement in California. The tone of her poems varies widely, from bliss to desperation: “who finked on the angels / who stole the holy grail and hocked it for a jug of wine?” She created a scandal in the late 1960s by writing about sex in a strong, vivid way. Her profoundly anticonformist voice offers a radical shift of perspective by singing the beauty and the sacredness of female desire.

Why Empires Fall

By Peter Heather and John Rapley (2023)

4 As an Italian, I have long been intrigued by the fall of the Roman Empire. Peter Heather and John Rapley summarize the recent historiographic reassessments of the reasons for the fall. Their work also helps in understanding the present. Empires don’t necessarily collapse because they weaken. They fall because their success brings prosperity to a wider part of the world. They fall if they cannot adjust to the consequent rebalancing of power and if they try to stop history with the sheer power of weapons. “The easiest response to sell to home audiences still schooled in colonial history is confrontation,” the authors write. “This has major, potentially ruinous costs, compared to the more realistic but less immediately popular approach of accepting the inevitability of the periphery’s rise and trying to engage with it.”

The Mūlamadhyamakakārikā

By Nāgārjuna (ca. A.D. 150)

5 This major work of the ancient Indian Buddhist philosopher Nāgārjuna lives on in modern commentaries and translations. Among the best in English is Jay L. Garfield’s “The Fundamental Wisdom of the Middle Way” (1995). Nāgārjuna’s text was repeatedly recommended to me in relation to my work on the interpretation of quantum theory. I resisted, suspicious of facile and often silly juxtapositions between modern science and Eastern philosophy. Then I read it, and it blew my mind. It does indeed offer a possible philosophical underpinning to relational quantum mechanics, which I consider the best way to understand quantum phenomena. But it offers more: a dizzying and captivating philosophical perspective that renounces any foundation. According to this view, the only way to understand something is through its relation with something else—nothing by itself has an independent reality. In the language of Nāgārjuna, every thing, taken by itself, is “empty,” including emptiness itself. I find this a fascinating intellectual perspective as well as a source of serenity, with its acceptance of our limits and impermanence.



Thursday, December 28, 2023

Origins of our current crises in the 1990s, the great malformation, and the illusion of race.

I'm passing on three clips I found most striking from David Brooks's recent NYTimes Sidney Awards column:

I generally don’t agree with the arguments of those on the populist right, but I have to admit there’s a lot of intellectual energy there these days. (The Sidneys go to essays that challenge readers, as well as to those that affirm.) With that, the first Sidney goes to Christopher Caldwell for his essay “The Fateful Nineties” in First Things. Most people see the 1990s as a golden moment for America — we’d won the Cold War, we enjoyed solid economic growth, the federal government sometimes ran surpluses, crime rates fell, tech took off.

Caldwell, on the other hand, describes the decade as one in which sensible people fell for a series of self-destructive illusions: Globalization means nation-states don’t matter. Cyberspace means the material world is less important. Capitalism can run on its own without a countervailing system of moral values. Elite technocrats can manage the world better than regular people. The world will be a better place if we cancel people for their linguistic infractions.

As Caldwell sums it up: “America’s discovery of world dominance might turn out in the 21st century to be what Spain’s discovery of gold had been in the 16th — a source of destabilization and decline disguised as a windfall.”


In “The Great Malformation,” Talbot Brewer observes that parenthood comes with “an ironclad obligation to raise one’s children as best one can.” But these days parents have surrendered child rearing to the corporations that dominate the attention industry (TikTok, Facebook, Instagram and so on): “The work of cultural transmission is increasingly being conducted in such a way as to maximize the earnings of those who oversee it.”

He continues: “We would be astonished to discover a human community that did not attempt to pass along to its children a form of life that had won the affirmation of its elders. We would be utterly flabbergasted to discover a community that went to great lengths to pass along a form of life that its elders regarded as seriously deficient or mistaken. Yet we have slipped unawares into precisely this bizarre arrangement.” In most societies, the economy takes place in a historically rooted cultural setting. But in our world, he argues, the corporations own and determine the culture, shaping our preferences and forming, or not forming, our conception of the good.


It’s rare that an essay jolts my convictions on some major topic. But that happened with one by Subrena E. Smith and David Livingstone Smith, called “The Trouble With Race and Its Many Shades of Deceit,” in New Lines Magazine. The Smiths are, as they put it, a so-called mixed-race couple — she has brown skin, his is beige. They support the aims of diversity, equity and inclusion programs but argue that there is a fatal contradiction in many antiracism programs: “Although the purpose of anti-racist training is to vanquish racism, most of these initiatives are simultaneously committed to upholding and celebrating race.” They continue: “In the real world, can we have race without racism coming along for the ride? Trying to extinguish racism while shoring up race is like trying to put out a fire by pouring gasoline on it.”

I’ve heard this argument — that we should seek to get rid of the whole concept of race — before and dismissed it. I did so because too many people I know have formed their identity around racial solidarity — it’s a source of meaning and strength in their lives. The Smiths argue that this is a mistake because race is a myth: “The scientific study of human variation shows that race is not meaningfully understood as a biological grouping, and there are no such things as racial essences. There is now near consensus among scholars that race is an ideological construction rather than a biological fact. Race was fashioned for nothing that was good. History has shown us how groups of people ‘racialize’ other groups of people to justify their exploitation, oppression and annihilation.”

Thursday, December 07, 2023

Google's Bard and ChatGPT-4 on using AI to explore beyond language and words...

I show first the prompt I addressed yesterday afternoon to both Bard and ChatGPT-4. The more nuanced response I received from Bard makes me wonder if Google has already started to use the new Gemini AI algorithms announced yesterday to power Bard. Here is the prompt, followed by the Bard and ChatGPT-4 responses:

Exploring beyond language and words - starting a conversation: Let us suppose that increasingly sophisticated LLMs, generative pretrained transformers, will become superior to humans in performing any linguistic chores requiring use and knowledge of all the world's written literature that was used as the training opus. This might free humans to pay more attention to understanding non-linguistic aspects of their experiences (such as non verbal music, art, prosody, body language, social mirroring, etc). Could you suggest ways in which AI could assist in defining and elucidating the non-linguistic aspects of our lives so that we might devote more of our attention to them rather than to linguistic chores?

Google's Bard's response:
How AI could help us explore beyond language and words:

Your suggestion about increasingly sophisticated LLMs taking over linguistic tasks opens up a fascinating possibility: freeing humans to focus on non-linguistic aspects of experience. Here are some ways AI could assist us in this exploration:

1. Data analysis and pattern recognition:

  • Large-scale analysis of nonverbal communication: AI could analyze vast datasets of audio, video, and sensor data to identify patterns and relationships between nonverbal cues like facial expressions, gestures, posture, and vocal intonation. This could deepen our understanding of non-verbal communication in various contexts, from interpersonal communication to artistic expression.
  • Mapping emotions and mental states: AI could analyze physiological data like heart rate, skin conductance, and brain activity to identify correlations with emotions and mental states. This could lead to the development of tools that help us better understand our own and others' emotions, fostering empathy and improving communication.
  • Identifying patterns in non-verbal art forms: AI could analyze large collections of music, paintings, sculptures, and other non-verbal art forms to identify recurring patterns, styles, and themes. This could help us discover new insights into human creativity and unlock hidden meanings in art.

2. AI as a creative collaborator:

  • Generating novel forms of non-verbal communication: AI could be used to generate new forms of non-verbal communication, such as synthesized musical pieces, abstract visual art, or even gestures and facial expressions. This could lead to new artistic expressions and enhance communication beyond words.
  • Personalized non-verbal experiences: AI could personalize non-verbal experiences based on individual preferences and emotional states. For example, AI could curate music playlists or generate visual art that resonates with a user's mood. This could deepen our emotional engagement with non-verbal art forms and enhance our well-being.
  • Interactive storytelling and learning: AI could create interactive non-verbal narratives that adapt to user choices and emotions. This could provide engaging and personalized learning experiences in various fields, from language learning to cultural immersion.

3. AI as a facilitator for human interaction:

  • Real-time translation and interpretation of non-verbal cues: AI could be used to translate nonverbal cues in real-time, facilitating communication across cultures and bridging the gap between people with different communication styles.
  • Augmenting sensory experiences: AI could be used to create immersive virtual environments that stimulate various senses, allowing us to explore and interact with the world in new ways. This could broaden our understanding of non-verbal communication and deepen our appreciation for sensory experiences.
  • Promoting mindfulness and self-awareness: AI could be used to develop tools that help us become more aware of our own non-verbal communication and emotions. This could enhance our self-awareness and improve our ability to manage our emotions and relationships.

4. Ethical considerations:

While AI holds immense potential for assisting us in exploring the non-linguistic aspects of our lives, ethical considerations must be addressed. These include:

  • Bias and fairness: AI models are susceptible to biases present in the data they are trained on. It is crucial to ensure that AI tools used to analyze and interpret non-verbal communication are unbiased and fair.
  • Data privacy and security: The use of AI necessitates the collection and analysis of personal data. It is essential to ensure data privacy and security, safeguarding individuals' information from unauthorized access or misuse.
  • Human autonomy and control: AI should be used to enhance human understanding and experience, not replace it. We must ensure that AI remains a tool that empowers humans to explore and appreciate the non-linguistic aspects of life, not a force that dictates our thoughts and emotions.

By leveraging the power of AI responsibly and ethically, we can embark on a journey of deeper understanding and engagement with the non-linguistic aspects of our existence. This journey holds the potential to enrich our lives, strengthen our relationships, and unlock new forms of creativity and expression.


ChatGPT-4's response:

The idea of AI assisting in understanding and elucidating non-linguistic aspects of human experience is intriguing and holds much potential. Here are several ways in which AI could be instrumental in this area:

  1. Enhanced Emotional Recognition and Analysis: AI can be trained to recognize and interpret human emotions through facial expressions, voice modulation, and body language. This would be particularly useful in enhancing interpersonal communication and understanding, especially in situations where verbal communication is limited or not possible.

  2. Music and Art Analysis: AI can analyze patterns, styles, and historical influences in music and art. By doing so, it can provide insights into the emotional and cultural contexts of various artworks or musical pieces, deepening our appreciation and understanding of these forms.

  3. Virtual Reality and Augmented Reality Experiences: AI can be used to create immersive VR and AR experiences that enable individuals to explore non-linguistic aspects of human culture and history, such as virtual tours of art galleries, historical sites, or even recreations of historical events.

  4. Non-Verbal Communication Training: AI can assist in teaching and improving non-verbal communication skills, such as understanding body language, gestures, and facial expressions, which are crucial for effective interpersonal interactions.

  5. Social Mirroring and Behavioral Analysis: AI can analyze social interactions and provide feedback or insights into social dynamics. This could be useful in understanding group behaviors, cultural norms, and even in improving social skills.

  6. Sensory Enhancement Technologies: AI-driven technologies could enhance or augment human senses, allowing for a deeper engagement with the non-linguistic elements of the world, such as experiencing a wider range of sounds or visual details.

  7. Emotional and Psychological Support: AI can offer personalized support and interventions based on non-linguistic cues, aiding in mental health and emotional well-being.

  8. Enhancing Creative Processes: AI can assist artists, musicians, and creators by suggesting non-linguistic elements that might enhance their work, based on analysis of vast amounts of artistic and musical data.

  9. Personalized Learning and Development: AI can tailor learning experiences to individual preferences and learning styles, focusing on non-linguistic methods like interactive simulations, visual aids, and hands-on activities.

  10. Cross-Cultural Understanding and Communication: AI can help bridge cultural gaps by providing insights into non-verbal cues and norms that are specific to different cultures, aiding in cross-cultural understanding and communication.

In all these ways, AI has the potential to significantly enrich human understanding and appreciation of the non-linguistic aspects of our lives, allowing us to engage more deeply with the world beyond words.


Sunday, November 26, 2023

Religious wars in the tech industry.

In this post I continue to use MindBlog as my private reference library to put down essential ideas I want to hang onto by extracting relevant clips, in this case from the most recent essay by Venkatesh Rao. Here are the clips:

Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The OpenAI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.

Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.

Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.

Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.

Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.

Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.

It’s not that there’s no common ground. Everyone agrees GPUs are important, Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is the part that is common ground is largely disconnected from the contentious bits.

In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps the continuation of incredulous staring by other means. Such as boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and in the future, perhaps actual hot wars.

Why do I think we are entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) the vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle of a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.

The Six Religions

Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.

The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.

The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.

The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.

The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.

The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.

The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.

Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?

The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology… Ordinary political interests, while present, are secondary.

The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…

The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.

What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.

Sunday, November 19, 2023

AI is infusing our daily lives. Is it an existential threat?

As AI is being subtly melded into our daily lives by the Google, Apple, and Microsoft clouds, there is continuing debate over whether it is or isn't an existential threat. This debate has been brought to a head by the sudden and unexpected firing of OpenAI's chief executive Sam Altman, which has starkly exposed the philosophical rift in the AI community between those who are more or less concerned about the potential risks of its untrammeled development. How many guardrails, if any, should be in place? I have been on the techno-optimists' side of this debate and passed on Marc Andreessen's rather extreme and overstated Substack post titled "Why AI Will Save the World" in a June 23 MindBlog post. I was then completely horrified by "Fighting," a subsequent piece by this ideologist of the Silicon Valley elite, which took a very positive view of Elon Musk challenging Mark Zuckerberg to a cage fight. I was going to write a piece on his dystopian future world, but two NYTimes pieces have done a better job at it than I could. They are Elizabeth Spiers's "A Tech Overlord’s Horrifying, Silly Vision for Who Should Rule the World" and Ezra Klein's "The Chief Ideologist of the Silicon Valley Elite Has Some Strange Ideas".

Friday, November 10, 2023

Personalized chatbots and wearable AI devices for melding AI with our everyday lives

Personalized GPTs are here, but apparently not yet at a level accessible to consumers like me who don't want to do coding. I've been looking into making my own personal chatbot whose training data would be the 50 MB, about 20,000 pages, of my personal journal. I would then like to query and chat with the resulting chatbot about events in my life since 1980. If anyone reading this has some advice, let me know!
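The retrieval idea behind such "chat with your own documents" tools is simple enough to sketch, for readers curious what the coding would involve. The toy below (pure standard-library Python; the journal entries are invented placeholders, and the tiny word limit is only to keep each entry in its own chunk) splits text into chunks and ranks them against a question by TF-IDF-weighted word overlap. A full system would pass the top-ranked chunks, along with the question, to an LLM as context:

```python
import re
from collections import Counter
from math import log

def chunk_text(text, max_words=120):
    """Split text on blank lines, then pack paragraphs into chunks of roughly max_words."""
    paras = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    chunks, current, count = [], [], 0
    for p in paras:
        words = len(p.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(p)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def tokenize(s):
    return set(re.findall(r"[a-z']+", s.lower()))

def top_chunks(chunks, question, k=3):
    """Rank chunks by TF-IDF-weighted word overlap with the question."""
    n = len(chunks)
    chunk_tokens = [tokenize(c) for c in chunks]
    df = Counter()                      # in how many chunks each word appears
    for toks in chunk_tokens:
        df.update(toks)
    q = tokenize(question)
    def score(toks):
        # rare shared words (high log(n/df)) count more than common ones
        return sum(log(n / df[w]) for w in q & toks)
    ranked = sorted(range(n), key=lambda i: score(chunk_tokens[i]), reverse=True)
    return [chunks[i] for i in ranked[:k]]

# Toy stand-in for the real 50 MB journal.
journal = """January 1980. Began teaching the vision course at Madison.

March 1995. Long sailing trip on Lake Mendota with the lab group.

June 2007. First piano recital since moving to Austin."""

chunks = chunk_text(journal, max_words=10)
context = top_chunks(chunks, "When did I start teaching the vision course?", k=1)
print(context[0])  # prints the January 1980 entry
```

A real build on 20,000 pages would use vector embeddings rather than word overlap, but the chunk-retrieve-then-ask loop is the same one the commercial "custom GPT" tools automate.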

Below are some URLs to my morning's reading:

These links are to articles from Kevin Roose, NYTimes tech columnist:  

Personalized A.I. Agents Are Here. Is the World Ready for Them?

Kevin Roose Hard Fork podcast (transcript available) on this and other AI topics 

And this article is on AI devices seeking to cut into smartphone dominance 

It points to two such devices: The Humane Ai Pin and The Rewind AI Pendant 

Friday, October 13, 2023

Blithe Spirits and Heavy Souls

This post is another abstracting/condensation of a recent Venkatesh Rao essay with the title of this post (its subtitle: ‘Learning an experiential posture for the Permaweird’). In this case, it is a 4-fold reduction in the words I need to scan through when I return to this MindBlog archive to recall the ideas.

I’ve found some useful “unexpected validation for unexpressed thoughts the reader already holds” in his contrasting of ‘insight porn’ with ‘presence prose,’ as well as his distinction between Blithe Spirits and Heavy Souls, which makes the point that you don’t have to retreat from the world to feel good.

Rao doesn’t make it easy on readers. He assumes a thorough knowledge of contemporary and classical culture. I suggest looking them up if references to ‘fingerspitzengefühl’, ‘Red Pill/Blue Pill’, or ‘larps’ mystify you. Here’s my reduction:

…the kind of writing I am choosing to do is vaguely unsatisfying… in the “endocrine circuitry and muscle memory around this behavior isn’t tuned right” way. The vibe is off. I’m “not feeling it.” It feels like more of a grind…writing in the kind of hard mode I’m in right now, when the fingerspitzengefühl doesn’t feel quite right, feels draining rather than energizing. It has the desired objective outcomes, but lacks the subjective payoff.

I think the problem isn’t with me, but the growing fogginess of the world. So let’s start by looking at why insight porn used to work, why it stopped working, why writing in the new mode feels less satisfying, and why that’s actually a good thing.

Insight porn works by covertly constructing satisfying models of the world starting with nuggets of unexpected validation for unexpressed thoughts the reader already holds…Insight porn rarely tells you something you don’t know. Instead, it gives you permission to consciously believe something you already unconsciously suspect to be true, and to drop the opposed belief you didn’t realize you were only pretending to hold. That’s why a sense of relief is a primary response to consuming good insight porn.

Insight porn at its most sophisticated was a delivery mechanism for pills, as in red pill/blue pill…With relatively few words, you could “pill” people in various ways, using oblique approaches against which they had few cognitive defenses, causing large sections of their world-views to collapse, generally in favor of some flavor of cynical anomie or radicalized fervor, and they’d thank you for it…with an array of pills, eroding and destabilizing the entire landscape of worldviews…

…it was actually a useful, perhaps even critical thing to be doing roughly between 2009-17, when serious problems were beginning to manifest across the civilizational stack. If mental models hadn’t been developing cracks and fissures along with the worlds they were supposed to be describing, we’d all have suffered much more mightily than we did through the culture wars and Great Weirding. Through the shadow war of mass, mutual pilling, we inoculated ourselves with a diverse array of satirical skepticisms of the prevailing societal order. We became better prepared to inhabit the Permaweird we now find ourselves in.


The kind of writing I’ve been practicing … over the last few years is about trying to make myself, and the reader, feel present in the world. Let’s call it presence prose. …One visible sign of this is that I often talk about current events these days, which I rarely did ten years ago. I actively aimed for “timeless” a decade ago. Now, at least in this newsletter, I’m fine with transient relevance… presence prose is not news. Nor is it “analysis” in any legibly scoped and bounded way (analysis within a “niche” is another 2010s genre of writing that feels increasingly dated these days). It’s a kind of mental time travel into a mode of experiencing some leaky corner of the entangled world in real time.

To write for presence, you must begin by setting aside mental models and theories in favor of whatever flow of raw data you can find and consume…. Where possible, you must also travel closer to empirical and phenomenological sources, while being ever mindful of the fact that such close-in loci today also feature people consciously crafting livestreamed narrative theaters of validation rather than actually paying attention. Such people are part of the phenomenology you must make sense of. Elon Musk visiting the US border isn’t a citizen journalist looking into the immigrant crisis there. It is a bit of theater that is itself part of that crisis, which comes swaddled in multiple overlapping theaters of this sort.


The thing is, insight porn is simply more satisfying to produce and consume than presence prose. Writing or reading in that mode reliably produces the dopamine required to produce or consume more. While not quite entertainment — insight porn has utility, and often calls for demanding levels of effort on both sides — you don’t do it for the utility.

A diet of insight porn, I think, creates and sustains a certain light-hearted and carefree way of being present in the world. It turns you into a blithe spirit. Under the right conditions, it can make you worldly, yet carefree. You feel present in the world, but untroubled, if not untouched, by your experiences in it.

The right conditions are obvious: This diet works when the world is doing well enough, at least for you, that you don’t have to choose between being present and feeling good. When most of the things you could choose to think, write, or read about, or even directly experience, are pleasant. When you only have to add a calibrated amount of discomfort through insight porn to feel present, like a dash of pepper on bland food, there are no real tough choices to be made.

Through the 2000s and 2010s, even my most serious-minded and altruistic friends, working on difficult social problems that entailed a great deal of misery for other living beings (both human and animal), typically could find refuge in other areas of interest that were pleasant to think about. They subsisted on a diet of insight porn. The world had problems, and if you were in a position of privilege, you could choose to work on some of them, but it didn’t look like a dumpster fire in every direction. You didn’t have to put yourself through escape-artist contortions to indulge in a bit of restorative escapism. You didn’t have to build entire reactionary larps to inhabit.

You didn’t have to retreat from the world to feel good.

it does seem that regardless of ideology or how hopped up on pills you are, the dumpster-fire index has been growing higher for everybody. In 2023, it is still possible to hold on to a generally pollyannaish worldview of all being for the best in this best of all possible worlds, but it takes significantly more cognitive effort than in 2013. Having a mansion helps, which is one reason I’m scheming to get one. Having a particular talent for an absorbing pursuit of excellence in something like wood-working in a cabin in the woods helps as well.

What are the consequences of pollyannaish postures being more expensive and less attractive to sustain?

From my observations, I’d say most people switch modes. They go from being blithe spirits to being heavy souls: people who operate with a default mindset of a world on fire, a default joyless world where ease and pleasure do not come naturally.

Most people, I suspect, have a preference for one mode or the other, but don’t have psyches rigidly anchored to either blithe spiritedness or heavy soulfulness. Depending on the dumpster-fieriness of the world, most people can experience either mode of being. I suspect most of us are naturally blithe-spirited, just as most of us are naturally right-handed. But we can all experience heavy souls, just as we can all use our left hands. And as with left-handed behaviors, heavy souled behaviors are awkward, janky, and unnatural for most people.

I certainly am naturally blithe-spirited. I don’t go out of my way to see the world as a dumpster fire, even if I don’t go out of my way to retreat from a pressing sense that it is. The preference is perhaps not as strong as my right-handedness. I can only write slowly and very badly with my left hand, but I can write tolerably well in heavy-soul mode. Almost as well as I can in a blithe-spirit mode.

This understanding feels correct to me. Writing today feels unsatisfying in the same way using my left hand for a task requiring any sort of dexterity does. The fingerspitzengefühl is off because I’m using fingerspitzen I’m not used to using. I’m not feeling it, because I don’t often use this hand, and the finger-tips haven’t developed as much sensitivity.

But I’m getting slightly more comfortable with each essay. And the resulting posture in the world feels slightly less wrong each time. I don’t think I’d go back to a blithe spirit posture even if I could. Any more than I’d go back to right-handedness if I managed to practice my way into full ambidexterity.

The world is changing, and it feels right to change with it. And if the appropriate posture for experiencing the Permaweird is a heavy-souled one, so be it.

Monday, October 02, 2023

Fluid Fogs and Fixed Flows

For the MindBlog readers who may have noted my “This is the New ‘Real World’” post and followed me down the rabbit hole of Venkatesh Rao’s narrative worlds, I pass on the following abstracting of his 9/23/2023 installment, titled “Fluid Fogs and Fixed Flows,” which reduces its length by half. I have done this mainly for my own use, to facilitate my future recall of his ideas:

Worlds and Real World

To briefly recap last week’s essay, I’m using world and real world in the sense of last week’s essay: A world is a coherent, immersive, totalizing subjectivity you can inhabit, as a sort of cognitive indoors. The real world is the corresponding outdoors — the messy union of the dozen or so most consequential worlds in existence at any given time.

The process by which the real world emerges, as a negotiation among worlds, is one that makes it qualitatively different. In brief, regular worlds are finite games, while the real world is the infinite game.

Weirdness, Fog, and Unnarratability

The relationship between weirdness, brain fog, and unnarratability is something like the relationship between a crisis, the panic it induces, and the solvability of the crisis.

World-brain fog affects those in a given world. Real-world-brain fog affects everybody. For us individual sentient elements of these world-brains, this fog manifests as the spectacle of history becoming incoherent.

Fog vs. Flow

The opposite of brain fog is flow. When thoughts flow easily, clearly, and tastefully, from one idea to the right next idea… where the one-step-at-a-time ethos I identified earlier in this series as the essence of never-ending stories is not just all you can live by, it’s all you need to live by.

To be clear, I’m not saying fog is bad and flow is good. That would be like saying clear weather is good and storms are bad. This is just a pair of opposed subjective cognitive conditions. Setting aside valuative judgments, my claim is that the real-world-brain is in a state of fog rather than flow right now.

To say that the real world is suffering from world-brain fog is to say that the infinite game is in narrative trouble, regardless of the prevailing calculus of winners and losers in the constituent finite games. The question of winners and losers is distinct from the question of whether the game can be continued. The real world being foggy means it is hard to see how to continue the game, whether you’re currently winning or losing any specific game.

Okay, so that’s the big thesis here: history feels far more unnarratable than it needs to, because the real world is suffering from world-brain fog. If we can get rid of this fog, we’ll still have to deal with the objective realities of the permaweird, but that might be an easier task.

Individual Fogs

To think through the idea of a foggy real-world brain, it’s useful to think about the more familiar individual-brain phenomenon.

I’ll use myself as an example to analyze these factors… Looking back 10 years at my 2013 archives, 2023’s output of words feels like a congealed sludge by comparison… The sludginess of 2023 seems to afflict all words being produced by everybody.

In the last couple of years, this god-like situation awareness of the broad currents of my own thought has become dissipated, fragmented, and yes, foggy. I often forget obvious connections I’ve made in the past, or worse, fail to make them in the first place. Sometimes I forget entire trails of thought I’ve gone down, over multiple essays. Sometimes I clumsily re-think thoughts I’ve previously thought in more elegant ways. There is no sense of a longer compounding journey unfolding over years and millions of words. Instead, there is a sense of a random walk comprising individual short outings of a few thousand words. When the fog is at its worst, the 2 million words seem like so much rubble.

So my individual brain fog in the sense of such missed connections and missed opportunities for emergence is bad for the kind of thinking and writing I do. The fog/flow pair is neutral, but for certain kinds of activity, such as thinking and writing in hypertext-augmented ways, fog is very bad. Just like literal fog is very bad for ships trying to navigate treacherous waters.

The largest fraction of the value of writing online in a densely internally linked way³ lies in the larger structures of thought that emerge. I’ve previously jokingly referred to my personal instance of this as the “extended ribbonfarm blogamatic universe,” but more seriously and generally, we might call these personal protocol narratives. It’s a particular way of being in the infinite game, one step at a time, that’s available to writers. Anyone who writes primarily online, with a lot of nonlinear internal hyperlinking, has a personal protocol narrative by default. Traditional writers usually don’t, unless they work extra hard for it⁴ (something I'm too lazy to do, which makes me think in a non-internet world, I wouldn’t be a writer).

This superpower is the reason people like me eschew traditional media and book-writing and largely stick to blogs, microblogs, and newsletters. Not only is the emergent landscape the bulk of the value, it is the main enabling factor in its own creation. I can write in certain ways because I have this evolving canvas doing most of the work. If this emergent landscape of thought starts to disappear, the whole thing falls apart.

And while hypertext is a powerful brain-augmentation technology, it can’t defend against all cognitive afflictions. In particular, brain fog is deadly. It weakens your ability to make new internal links, and as a result makes the connected landscape less connected, and therefore both less useful, and less usable. Brain fog drives a vicious cycle of degeneration towards a more primitive textuality. At some point, I might have no technical advantage at all over book-writing cavemen or even typewriter-wielding Neanderthals.

Entangled Fogs

Some technologies are simply foggier than others… email newsletter platforms are much foggier than blogs… blogs simply want to create rich internal linking… I use an order of magnitude fewer links in newsletters than in blog posts. I know this because I still retain stronger gestalt memories of my blog archives than my newsletter archives.

Biology and technology conspire to create brain fog in messy ways. When I got Covid a year ago, and experienced a few months of a more biological style brain fog, writing in my peculiar way felt insanely difficult, and what writing I was able to do was much more disconnected than my norm…much of brain fog can and should be attributed to factors in the environment. Just as your panic at being caught in a fire isn’t entirely in your head — there is actually a fire — brain fog isn’t all in your head: you’re in a foggy condition. You’re in an unnarratable world. The stories that you want to tell, and are used to telling, are suddenly less tellable.

This is where the entanglement with world-brain fog comes in.

Accounting for age, medium, and Covid-type effects, I think there remains a large unexplained factor in every case, though the fraction varies… I think there is something going on at the cultural, societal level, that makes it vastly harder to remember the gist of large bodies of information… But if I am right, unnarratability and world-brain fog should affect everybody, regardless of age and occupation, and I think I see signs everywhere that this is the case.

Fixed and Fluid Logics of Caring

Now we can ask the question. What does it mean for a world, specifically the real world to experience something analogous to what I just described at the individual level? What is world-brain fog? …And since there is nobody “there” to experience it, how does it manifest in the lives of us individuals who are like the neurons of the world brains we inhabit?

We’ve already seen one element of what it feels like. A sense that there’s more fogginess than you can attribute to your own circumstances…Here’s another: it’s hard to decide what to care about. Logics of caring are in fact essential in creating flow out of fog. The world is always complex. What you care about is what determines how simple it is for you. How you pick what to care about is your logic of caring.

…you might want a locus of care that is both stable, and world-like. This disposition is what I’m calling fixed logic of care…People with fixed logics of care love to talk about values, because that’s where the fixedness manifests explicitly.

…you might want a locus of care that follows the liveliest streams of events in the world. …You want to be where the action is, not where your values point. This disposition is a fluid logic of care.

Fixed/fluid is not the same as conservative/liberal, traditional/progressive, or winning/losing.

It might seem like I’ve set up an argument that admits no world-scale flow at all for either fixed or fluid logics of caring. This is incorrect. A few well-defined groups sneak through this sieve of constraints and appear to be experiencing world-scale flow. All of them operate by fixed logics of caring, but also have an additional factor in common: they rest atop what I call interposing intelligences.

Interposing Intelligences

The first well-defined group that seems to have retained a sense of world-scale flow is economists… anyone for whom the global economy is the primary sense-making lens on reality… it’s all just been a game of watching various numbers go up and down in a curiously ahistorical mirror world. In that mirror world, there has been no Great Weirding.

There’s a reason for this. The economy offers one of the few remaining world-scale fixed logics of caring. To care through that logic about anything in the world at all is to care about it in economic terms. There’s even a term for this operation of bringing a matter into the fixed logic of care: “pricing it in.” To the economist-mind, economics is the primary phenomenological ground of the world. Things don’t become real until they become economically real. Intentions don’t become real until they become revealed preferences. Narratives don’t become real until they show up in indicators.

Now this is interesting. Economics seems to function in modernity as a better religion than actual religions. It allows you to have a sense of inhabiting the world rather than a besieged, paranoid corner of it. It allows you to care about the world in a fixed way, while still keeping up reasonably with its fluid, dynamic, changing nature. What it cannot accommodate, it can confidently treat as noise.

Unlike the changeless, distant gods or Gods of traditional religions, the God of economics is a live intelligence, doing all the fluid thinking so you can care in fixed ways. And it’s obvious why it’s able to do that. The economy is, after all, the closest thing to a live, planet-scale artificial intelligence.

A different way to think about this helps generalize from this one data point. Economics provides a fixed logic of caring despite a complex, Permaweird world because it rests atop a vast, interposing⁵ intelligence that processes or tunes out most of the weirdness. A kind of intelligence that religion once embodied in a slower, less weird era. A Turing-complete pre-processing/post-processing layer, mediating your relationship to reality. I’m using the term interposing intelligence rather than container or matrix because the mediation has a more open and leaky topology. It allows you to compute with reality data more easily, but doesn’t necessarily seal you off in a bubble reality. Interposing intelligences are more like slippers than spacesuits; more like gardens than arcologies.

The cryptoeconomy is another obvious example, with blockchains playing the role of the interposing intelligence.

A third world is the world of machine learning, which is a rather on-the-nose kind of interposing intelligence layer. … There is a new world of people being born, whose relationship to reality is beginning to be entirely mediated by the interposing intelligence of machine learning.  

A fourth world is perhaps the oldest: the global institutional landscape peopled by careerists devoted to individual institutions. It’s not as obvious as in the case of the economy, but the institutional world (which its critics often refer to as the global Deep State), along with its inhabitants (whom critics tend to refer to uniformly as “bureaucrats”), is in fact a world-scale computer that sustains a fixed logic of caring within itself. Shorn of the conspiratorial fantasies of critics, deep state is not a bad term for it.

Is there a way to hold on to a fixed logic of caring, without retreating from the world, and without resting on top of an interposing intelligence? I don’t think this is possible anymore.

Find Fluidity

The problem with everybody switching to fixed logics of caring is that it doesn’t solve the fogginess of the real world. In fact, even if all dozen or so consequential worlds that make up the real world were to harden into de facto fixed-logics-of-caring worlds that individually found flow within, you would still not be free of the fog in the real world. Combating fog in the real world requires at least a fraction of humanity operating by fluid logics of caring.

To want a fluid logic of care is to want “a locus of care that follows the liveliest streams of events in the world.” … This used to work well until about 2015.

You could care about tech, for example. What was good for tech was good for the world, and vice versa. But unlike economics, tech does not offer a fixed logic for how to care.
Cosmopolitan globalism was another. Pre-wokism social justice was a third. Following basic scientific advances was a fourth.

But all these examples have “failed” in a certain way since 2015. You can still operate by them, but you will get lost in fog and lose all sense of flow. As a result, all these example worlds have succumbed to sclerotic fixed logics imported from adjacent domains. Technology is increasingly applied investment economics. Cosmopolitan globalism and social justice are now both applied Deep Statisms. No doubt other once-fluid logics of caring will get “compiled,” as it were, to fixed logics of caring running atop interposing intelligence layers.

So is there a way to retain a fluid logic of caring? Reality — and this time I mean material reality — does indeed have a liberal bias in a rather precise sense: it requires fluid logics of caring to de-fog. A logic of caring that follows the action instead of being driven by values.

No combination of fixed logics of caring will do the trick. Nor will operating atop a fixed interposing intelligence layer.

Multiple Interposing Intelligences

My big takeaway from the analysis so far is this: there is no way to retain flow in the world today without augmenting your intelligence in some way. This is evident even in my personal, rather primitive case of using hypertext as an integral element of my writing and sensemaking.

This is why all known examples of worlds in flow today rest atop powerful interposing intelligence layers that mediate relations to reality: the economy, blockchains, AI itself, and institutions. But the inescapable cost of this seems to be that fluid logics of caring become fixed, and our sense of the real world, as opposed to our favored individual ones, becomes vulnerable to fog.

To retain fluidity, you must retain an unmediated connection to reality. But the unaugmented brain is clearly not enough for that connection to be tractable to manage.

How do you resolve this paradox?

I think the trick is to inhabit more than one interposing intelligence layer. If you’re only an economist or only a deep-state institutionalist, you’ll retreat to a fixed logic of caring; a terminal derp.

But if you’re both, the possibility of fluid logics of caring remains, because the two interposing varieties of intelligence are not compatible. Sometimes they will be in conflict when they try to mediate your presence in the world. And in that conflict, you will find just enough of an unmediated connection to reality to continue caring about the world in a fluid way, without becoming overwhelmed by complexity.

A specific example of this is thinking about holding the stock of a company you work for. Both economic and institutional logics of caring apply, but neither provides a complete answer to the question of how much of the stock to hold, and when to sell. The two fixed answers will likely be incompatible, so you’ll need a fluid logic to integrate them. If you’re in the public sector, voting on taxes creates a similar tension.

I listed four world-scale interposing intelligences earlier, and each pairing seems to work well. Cryptoeconomics and traditional economics seem caught in a dialectic of discovering each other’s fundamental flaws and destabilizing each other. Machine learning and blockchains seem headed for a collision via zero-knowledge proof technologies. Institutionalism and blockchains seem headed for a collision via smart contract technology. Institutionalism and economics have been the locus of the familiar Left/Right tension across the world.

I’ll let you work out the other combinations, but if you’ve tried thinking about the world through any two of the available interposing intelligences, you’ll realize how difficult it is. Difficult, but it’s possible. And at least in my case, the more I practice, the better I get at this (I try to straddle all four of the ones I’ve listed).

Why does this work? Why does it serve to “continue the game” in infinite game terms? One way to think about it is to think about life in terms of step-by-step decisions.

If you live within a traditional world that does not supply an interposing intelligence layer at all, you will mostly not have any decision-support at all that can keep up. Your decisions outside your shrinking world will be random and chaotic. Your instinct will be to restrict scope until all decisions are within the scope of your logic of caring, whether fluid or fixed.

If you live atop a single interposing intelligence, you will always have meaningful decision-support within a fixed logic of caring. You’ll have a take on everything, and feel in flow within your world, but have a sense of the “real world” you share with others being in a state of insane chaos. It would all make sense and flow beautifully if only those others stopped being stupid and joined your world.

But if you live atop more than one interposing intelligence, you will have to choose at every step whether to tap into one of the available fixed logics of caring (picking a side), or to improvise your own choice. In the latter case, your thinking will leak through and connect to reality in an unmediated way. If you’re able to do this consistently, you will likely experience increasing amounts of flow, and also beat back the fogginess of the real world, not just your own world.

And this notion of straddling a sort of plate-tectonics of multiple interposing intelligences, with gaps, faultlines and inconsistencies, is the reason the resulting narrative is a kind of protocol narrative. The narrative of the real world emerges out of an interoperable network of world narratives. And through the conflicts between worlds, the infinite game keeps renewing itself.

But it takes a critical mass of humans operating by fluid logics of caring for this to happen. And until that critical mass is reached, the real world will remain foggy for everybody. And trying to be in that minority will be a thankless and stressful task, immersed in fog.
But then again, public service has never been an easy calling.