Monday, October 02, 2023

Fluid Fogs and Fixed Flows

For MindBlog readers who may have noted my "This is the New 'Real World'" post and followed me down the rabbit hole of Venkatesh Rao's narrative worlds, I pass on the following abstract of his 9/23/2023 installment, titled "Fluid Fogs and Fixed Flows," which reduces its length by half. I have done this mainly for my own use, to facilitate my future recall of his ideas:

Worlds and Real World

To briefly recap: I'm using world and real world in the sense of last week's essay. A world is a coherent, immersive, totalizing subjectivity you can inhabit, a sort of cognitive indoors. The real world is the corresponding outdoors — the messy union of the dozen or so most consequential worlds in existence at any given time.

The process by which the real world emerges, as a negotiation among worlds, is one that makes it qualitatively different. In brief, regular worlds are finite games, while the real world is the infinite game.

Weirdness, Fog, and Unnarratability

The relationship between weirdness, brain fog, and unnarratability is something like the relationship between a crisis, the panic it induces, and the solvability of the crisis.

World-brain fog affects those in a given world. Real-world-brain fog affects everybody. For us individual sentient elements of these world-brains, this fog manifests as the spectacle of history becoming incoherent.

Fog vs. Flow

The opposite of brain fog is flow: when thoughts flow easily, clearly, and tastefully, from one idea to the right next idea… where the one-step-at-a-time ethos I identified earlier in this series as the essence of never-ending stories is not just all you can live by, it's all you need to live by.

To be clear, I’m not saying fog is bad and flow is good. That would be like saying clear weather is good and storms are bad. This is just a pair of opposed subjective cognitive conditions. Setting aside valuative judgments, my claim is that the real-world-brain is in a state of fog rather than flow right now.

To say that the real world is suffering from world-brain fog is to say that the infinite game is in narrative trouble, regardless of the prevailing calculus of winners and losers in the constituent finite games. The question of winners and losers is distinct from the question of whether the game can be continued. The real world being foggy means it is hard to see how to continue the game, whether you’re currently winning or losing any specific game.

Okay, so that’s the big thesis here: history feels far more unnarratable than it needs to, because the real world is suffering from world-brain fog. If we can get rid of this fog, we’ll still have to deal with the objective realities of the permaweird, but that might be an easier task.

Individual Fogs

To think through the idea of a foggy real-world brain, it’s useful to think about the more familiar individual-brain phenomenon.

I’ll use myself as an example to analyze these factors… Looking back 10 years at my 2013 archives, 2023’s output of words feels like a congealed sludge by comparison… The sludginess of 2023 seems to afflict all words being produced by everybody.

In the last couple of years, this god-like situation awareness of the broad currents of my own thought has become dissipated, fragmented, and yes, foggy. I often forget obvious connections I’ve made in the past, or worse, fail to make them in the first place. Sometimes I forget entire trails of thought I’ve gone down, over multiple essays. Sometimes I clumsily re-think thoughts I’ve previously thought in more elegant ways. There is no sense of a longer compounding journey unfolding over years and millions of words. Instead, there is a sense of a random walk comprising individual short outings of a few thousand words. When the fog is at its worst, the 2 million words seem like so much rubble.

So my individual brain fog in the sense of such missed connections and missed opportunities for emergence is bad for the kind of thinking and writing I do. The fog/flow pair is neutral, but for certain kinds of activity, such as thinking and writing in hypertext-augmented ways, fog is very bad. Just like literal fog is very bad for ships trying to navigate treacherous waters.

The largest fraction of the value of writing online in a densely internally linked way³ lies in the larger structures of thought that emerge. I’ve previously jokingly referred to my personal instance of this as the “extended ribbonfarm blogamatic universe,” but more seriously and generally, we might call these personal protocol narratives. It’s a particular way of being in the infinite game, one step at a time, that’s available to writers. Anyone who writes primarily online, with a lot of nonlinear internal hyperlinking, has a personal protocol narrative by default. Traditional writers usually don’t, unless they work extra hard for it⁴ (something I'm too lazy to do, which makes me think in a non-internet world, I wouldn’t be a writer).

This superpower is the reason people like me eschew traditional media and book-writing and largely stick to blogs, microblogs, and newsletters. Not only is the emergent landscape the bulk of the value, it is the main enabling factor in its own creation. I can write in certain ways because I have this evolving canvas doing most of the work. If this emergent landscape of thought starts to disappear, the whole thing falls apart.

And while hypertext is a powerful brain-augmentation technology, it can’t defend against all cognitive afflictions. In particular, brain fog is deadly. It weakens your ability to make new internal links, and as a result makes the connected landscape less connected, and therefore both less useful, and less usable. Brain fog drives a vicious cycle of degeneration towards a more primitive textuality. At some point, I might have no technical advantage at all over book-writing cavemen or even typewriter-wielding Neanderthals.

Entangled Fogs

Some technologies are simply foggier than others… email newsletter platforms are much foggier than blogs… blogs simply want to create rich internal linking… I use an order of magnitude fewer links in newsletters than in blog posts. I know this because I still retain stronger gestalt memories of my blog archives than my newsletter archives.

Biology and technology conspire to create brain fog in messy ways. When I got Covid a year ago, and experienced a few months of a more biological style of brain fog, writing in my peculiar way felt insanely difficult, and what writing I was able to do was much more disconnected than my norm… much of brain fog can and should be attributed to factors in the environment. Just as your panic at being caught in a fire isn’t entirely in your head — there is actually a fire — brain fog isn’t all in your head: you’re in a foggy condition. You’re in an unnarratable world. The stories that you want to tell, and are used to telling, are suddenly less tellable.

This is where the entanglement with world-brain fog comes in.

Accounting for age, medium, and Covid-type effects, I think there remains a large unexplained factor in every case, though the fraction varies….I think there is something going on at the cultural, societal level, that makes it vastly harder to remember the gist of large bodies of information…But if I am right, unnarratability and world-brain fog should affect everybody, regardless of age and occupation, and I think I see signs everywhere that this is the case.

Fixed and Fluid Logics of Caring

Now we can ask the question: what does it mean for a world, specifically the real world, to experience something analogous to what I just described at the individual level? What is world-brain fog? … And since there is nobody “there” to experience it, how does it manifest in the lives of us individuals who are like the neurons of the world-brains we inhabit?

We’ve already seen one element of what it feels like. A sense that there’s more fogginess than you can attribute to your own circumstances…Here’s another: it’s hard to decide what to care about. Logics of caring are in fact essential in creating flow out of fog. The world is always complex. What you care about is what determines how simple it is for you. How you pick what to care about is your logic of caring.

…you might want a locus of care that is both stable and world-like. This disposition is what I’m calling a fixed logic of care… People with fixed logics of care love to talk about values, because that’s where the fixedness manifests explicitly.

…you might want a locus of care that follows the liveliest streams of events in the world. …You want to be where the action is, not where your values point. This disposition is a fluid logic of care.

Note that fixed/fluid is not the same as conservative/liberal, traditional/progressive, or winning/losing.

It might seem like I’ve set up an argument that admits no world-scale flow at all for either fixed or fluid logics of caring. This is incorrect. A few well-defined groups sneak through this sieve of constraints and appear to be experiencing world-scale flow. All of them operate by fixed logics of caring, but also have an additional factor in common: they rest atop what I call interposing intelligences.

Interposing Intelligences

The first well-defined group that seems to have retained a sense of world-scale flow is economists… anyone for whom the global economy is the primary sense-making lens on reality… it’s all just been a game of watching various numbers go up and down in a curiously ahistorical mirror world. In that mirror world, there has been no Great Weirding.

There’s a reason for this. The economy offers one of the few remaining world-scale fixed logics of caring. To care through that logic about anything in the world at all is to care about it in economic terms. There’s even a term for this operation of bringing a matter into the fixed logic of care: “pricing it in.” To the economist-mind, economics is the primary phenomenological ground of the world. Things don’t become real until they become economically real. Intentions don’t become real until they become revealed preferences. Narratives don’t become real until they show up in indicators.

Now this is interesting. Economics seems to function in modernity as a better religion than actual religions. It allows you to have a sense of inhabiting the world rather than a besieged, paranoid corner of it. It allows you to care about the world in a fixed way, while still keeping up reasonably with its fluid, dynamic, changing nature. What it cannot accommodate, it can confidently treat as noise.

Unlike the changeless, distant gods or Gods of traditional religions, the God of economics is a live intelligence, doing all the fluid thinking so you can care in fixed ways. And it’s obvious why it’s able to do that. The economy is, after all, the closest thing to a live, planet-scale artificial intelligence.

A different way to think about this helps generalize from this one data point. Economics provides a fixed logic of caring despite a complex, Permaweird world because it rests atop a vast, interposing⁵ intelligence that processes or tunes out most of the weirdness. A kind of intelligence that religion once embodied in a slower, less weird era. A Turing-complete pre-processing/post-processing layer, mediating your relationship to reality. I’m using the term interposing intelligence rather than container or matrix because the mediation has a more open and leaky topology. It allows you to compute with reality data more easily, but doesn’t necessarily seal you off in a bubble reality. Interposing intelligences are more like slippers than spacesuits; more like gardens than arcologies.

The cryptoeconomy is another obvious example, with blockchains playing the role of the interposing intelligence.

A third world is the world of machine learning, which is a rather on-the-nose kind of interposing intelligence layer. … There is a new world of people being born, whose relationship to reality is beginning to be entirely mediated by the interposing intelligence of machine learning.  

A fourth world is perhaps the oldest: the global institutional landscape peopled by careerists devoted to individual institutions. It’s not as obvious as in the case of the economy, but the institutional world (which its critics often refer to as the global Deep State) and its inhabitants (whom critics tend to refer to uniformly as “bureaucrats”) is in fact a world-scale computer that sustains a fixed logic of caring within itself. Shorn of the conspiratorial fantasies of critics, deep state is not a bad term for it.

Is there a way to hold on to a fixed logic of caring, without retreating from the world, and without resting on top of an interposing intelligence? I don’t think this is possible anymore.

Find Fluidity

The problem with everybody switching to fixed logics of caring is that it doesn’t solve the fogginess of the real world. In fact, even if all dozen or so consequential worlds that make up the real world were to harden into de facto fixed-logics-of-caring worlds that individually found flow within, you would still not be free of the fog in the real world. Combating fog in the real world requires at least a fraction of humanity operating by fluid logics of caring.

To want a fluid logic of care is to want “a locus of care that follows the liveliest streams of events in the world.” … It used to work well until about 2015.

You could care about tech, for example. What was good for tech was good for the world, and vice versa. But unlike economics, tech does not offer a fixed logic for how to care.
Cosmopolitan globalism was another. Pre-wokism social justice was a third. Following basic scientific advances was a fourth.

But all these examples have “failed” in a certain way since 2015. You can still operate by them, but you will get lost in fog and lose all sense of flow. As a result, all these example worlds have succumbed to sclerotic fixed logics imported from adjacent domains. Technology is increasingly applied investment economics. Cosmopolitan globalism and social justice are now both applied Deep Statisms. No doubt other once-fluid logics of caring will get “compiled,” as it were, to fixed logics of caring running atop interposing intelligence layers.

So is there a way to retain a fluid logic of caring?
Reality — and this time I mean material reality — does indeed have a liberal bias in a rather precise sense: it requires fluid logics of caring to de-fog. A logic of caring that follows the action instead of being driven by values.

No combination of fixed logics of caring will do the trick. Nor will operating atop a fixed interposing intelligence layer.

Multiple Interposing Intelligences

My big takeaway from the analysis so far is this: there is no way to retain flow in the world today without augmenting your intelligence in some way. This is evident even in my personal, rather primitive case of using hypertext as an integral element of my writing and sensemaking.

This is why all known examples of worlds in flow today rest atop powerful interposing intelligence layers that mediate relations to reality: the economy, blockchains, AI itself, and institutions. But the inescapable cost of this seems to be that fluid logics of caring become fixed, and our sense of the real world, as opposed to our favored individual ones, becomes vulnerable to fog.

To retain fluidity, you must retain an unmediated connection to reality. But the unaugmented brain is clearly not enough for that connection to be tractable to manage.

How do you resolve this paradox?

I think the trick is to inhabit more than one interposing intelligence layer. If you’re only an economist or only a deep-state institutionalist, you’ll retreat to a fixed logic of caring: a terminal derp.

But if you’re both, the possibility of fluid logics of caring remains, because the two interposing varieties of intelligence are not compatible. Sometimes they will be in conflict when they try to mediate your presence in the world. And in that conflict, you will find just enough of an unmediated connection to reality to continue caring about the world in a fluid way, without becoming overwhelmed by complexity.

A specific example of this is thinking about holding the stock of a company you work for. Both economic and institutional logics of caring apply, but neither provides a complete answer to the question of how much of the stock to hold, and when to sell. The two fixed answers will likely be incompatible, so you’ll need a fluid logic to integrate them. If you’re in the public sector, voting on taxes creates a similar tension.

I listed 4 world-scale interposing intelligences earlier, and each pairing seems to work well. Cryptoeconomics and traditional economics seem caught in a dialectic of discovering each other’s fundamental flaws and destabilizing each other. Machine learning and blockchains seem headed for a collision via zero-knowledge proof technologies. Institutionalism and blockchains seem headed for a collision via smart contract technology. Institutionalism and economics have been the locus of the familiar Left/Right tension across the world.

I’ll let you work out the other combinations, but if you’ve tried thinking about the world through any two of the available interposing intelligences, you’ll realize how difficult it is. Difficult, but it’s possible. And at least in my case, the more I practice, the better I get at this (I try to straddle all four of the ones I’ve listed).

Why does this work? Why does it serve to “continue the game” in infinite game terms? One way to think about it is to think about life in terms of step-by-step decisions.

If you live within a traditional world that does not supply an interposing intelligence layer at all, you will mostly not have any decision-support at all that can keep up. Your decisions outside your shrinking world will be random and chaotic. Your instinct will be to restrict scope until all decisions are within the scope of your logic of caring, whether fluid or fixed.

If you live atop a single interposing intelligence, you will always have meaningful decision-support within a fixed logic of caring. You’ll have a take on everything, and feel in flow within your world, but have a sense of the “real world” you share with others being in a state of insane chaos. It would all make sense and flow beautifully if only those others stopped being stupid and joined your world.

But if you live atop more than one interposing intelligence, you will have to choose at every step whether to tap into one of the available fixed logics of caring (picking a side), or to improvise your own choice. In the latter case, your thinking will leak through and connect to reality in an unmediated way. If you’re able to do this consistently, you will likely experience increasing amounts of flow, and also beat back the fogginess of the real world, not just your own world.

And this notion of straddling a sort of plate-tectonics of multiple interposing intelligences, with gaps, faultlines and inconsistencies, is the reason the resulting narrative is a kind of protocol narrative. The narrative of the real world emerges out of an interoperable network of world narratives. And through the conflicts between worlds, the infinite game keeps renewing itself.

But it takes a critical mass of humans operating by fluid logics of caring for this to happen. And until that critical mass is reached, the real world will remain foggy for everybody. And trying to be in that minority will be a thankless and stressful task, immersed in fog.
But then again, public service has never been an easy calling.
