This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Sunday, November 26, 2023
Religious wars in the tech industry.
Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The OpenAI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.
Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.
Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.
Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.
Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.
Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.
It’s not that there’s no common ground. Everyone agrees that GPUs are important, that Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is that the part that is common ground is largely disconnected from the contentious bits.
In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps the continuation of incredulous staring by other means: boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and, in the future, perhaps actual hot wars.
Why do I think we are entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) the vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle of a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.
The Six Religions
Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.
The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.
The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.
The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.
The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.
The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.
The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.
Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?
The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology… Ordinary political interests, while present, are secondary.
The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…
The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.
What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.
Friday, September 22, 2023
This is the New 'Real World'
For my own later reference, and hopefully of use to a few MindBlog readers, I have edited, cut and pasted, and condensed from 3960 to 1933 words the latest brilliant article generated by Venkatesh Rao at https://studio.ribbonfarm.com/:
The word world, when preceded by the immodest adjective real, is a self-consciously anthropocentric one, unlike planet, or universe. To ask what sort of world we live in invites an inherently absurd answer. But if enough people believe in an absurd world, absurd but consequential histories will unfold. And consequentiality, if not truth, perhaps deserves the adjective real.
Not all individual worlds that in principle contribute to the real world are equally consequential… A familiar recent historical real world, the neoliberal world, was shaped more by the beliefs of central bankers than by the beliefs of UFO-trackers. You could argue that macroeconomic theories held by central bankers are not much less fictional than UFOs. But worlds built around belief in specific macroeconomic theories mattered more than ones built around belief in UFOs. In 2003 at least, it would have been safe to assume this - it is no longer a safe assumption in 2023.
Of the few hundred consciously shared worlds like religions, fandoms, and nationalisms that are significant, perhaps a couple of dozen matter strongly, and perhaps a dozen matter visibly, the other dozen consisting of various sorts of black or gray swans lurking in the margins of globally recognized consequentiality.
This then, is the “real” world — the dozen or so worlds that visibly matter in shaping the context of all our lives…The consequentiality of the real world is partly a self-fulfilling prophecy of its own reality. Something that can play the role of truth. For a while.
The fact that some worlds survive a brutal winnowing process does not alter the fact that they remain anthropocentric is/ought conceits … A world that has made the cut to significance and consequentiality, to the level of mattering, must still survive its encounters with material, as opposed to social realities... For all the consequential might of the Catholic Church in the 17th century, it was Galileo’s much punier Eppur si muove world that eventually ended up mattering more. Truth eventually outweighed short-term consequentiality in the enduring composition of real.
It would take a couple of centuries for Galileo’s world to be counted among the ones that mattered in shaping the real world. And the world of the Catholic Church, despite centuries of slow decline, still matters… It is just that the real world has gotten much bigger in scope, and other worlds constituting it, like the one shaping the design of the iPhone 15, matter much more.
…to answer a question like what sort of world do we live in? is to craft an unwieldy composite portrait out of the dozen or so constituent worlds that matter at any given time …it is a fragile, unreliable, dubious, borderline incoherent, unsatisfying house of cards destined to die. Yet, while it lives and reigns, it is an all-consuming, all-dominating thing… the “real” world is not necessarily any more real than private fantasies. It is merely vastly more consequential — for a while.
When “the real world” goes away because we’ve stopped believing in it, as tends to happen every few decades, it can feel like material reality itself, rather than a socially constructed state of mind, has come undone. And we scramble to construct a new real world. It is a necessary human tendency. Humans need a real world to serve as a cognitive “outdoors” (and escape from “indoors”), even if they are not eternal or true. A shared place we can accuse each other of not living in, and being detached from…Humans will conspire to cobble together a dozen new fantasies and label it real world, and you and I will have to live in it too.
So it is worth asking the question, what sort of world do we live in? And it is worth actually constructing the answer, and giving it the name the real world, and using it to navigate life — for a while.
So let’s take a stab at it.
The real world of the early eighties was one defined by the boundary conditions of the Cold War, an Ozone hole, PCs, video games, Michael Jackson, a pre-internet social fabric, and no pictures of Uranus, Neptune, Pluto, or black holes shaping our sense of the place of our planet within the broader cosmos.
The real world that took shape in the nineties, the neoliberal world to which Margaret Thatcher declared there is no alternative (TINA), was one defined by the rise of the internet, unipolar geopolitics, the economic ascent of China, The Simpsons, Islamic terrorism, and perhaps most importantly, a sense of politics ceasing to matter against the backdrop of an unstoppable increase in global prosperity.
That real world began to wobble after 9/11, busted critical seams during the Great Recession, and started to go away in earnest after 2015, in the half-decade that ended with the pandemic. The passing of the neoliberal world was experienced as a trauma across the world, even by those who managed to credibly declare themselves winners.
What has taken shape in the early 20s defies a believable characterization as real, for winners and losers alike. Declaring it weird studiously avoids assessments of realness. Some, like me, go further and declare the world to be permaweird…the weirdness is here to stay.
Permaweird does not mean perma-unreal. The elusiveness of a “New Normal” does not mean no “New Real” can emerge, out of new, and learnable, patterns of agency and consequentiality…the forces shaping the New Real are becoming clear. Here is a list off the top of my head. It should be entirely unsurprising.
1 Energy transition
2 Aging population
3 Weird weather
4 Machine learning
5 Memefied politics
6 The slowing of Moore’s Law
7 Meaning crises (plural)
8 Stagnation of the West
9 Rise of the Rest
10 Post-ZIRP economics
11 Post-Covid supply chains
12 Climate refugee movements
You will notice that none of the forces on the list is particularly new or individually very weird. What’s weird is the set as a whole, and the difficulty of putting them together into a notion of normalcy.
Forces though, are not worlds. We may trade in our gasoline-fueled cars for EVs, but we do not inhabit “the energy transition” the way we inhabit a world-idea like “neoliberalism” or “religion.”
Sometimes forces directly translate into consequential worlds. In the 1990s, the internet was a force shaping the real world, and also created a world — the inhabitable world of the very online — that was part of the then-emerging sense of “real.”
Sometimes forces indirectly create worlds. Low-interest rates created another important constituent world of the Old Real …Vast populations in liberal urban enclaves lived out ZIRPy lifestyles, eating their avocado toast, watching TED talks, riding sidewalk scooters, producing “content”, and perversely refusing to be rich enough to buy homes.
Something similar appears to be happening in response to the force of post-ZIRP economics. The public internet, dominated by vast global melting-pot platforms featuring vast culture wars, appears to be giving way to a mix of what I’ve called cozyweb enclaves and protocol media,…This world too, will be positioned to consequentially shape the New Real as strongly as the very online world shaped the Old Real.
I won’t try to provide careful arguments here, or justify my speculative inventory of forces, but here is my list of resulting worlds being carved out by them, which I have arrived at via a purely impressionistic leap of attempted synthesis. Together, these worlds constitute the New Real:
1 Climate refugee world
2 Disaster world (the set of places currently experiencing disaster conditions)
3 Dark Forest online world
4 Death-star world (centered on the assemblage of spaces controlled by declining wealth or power concentrations)
5 Non-English civilizational worlds (including Chinese and Indian)
6 Weird weather worlds
7 Non-institutional world (including, but not limited to, free-agent and blockchain-based worlds)
8 Trad Retvrn LARP world
9 Retiree world
10 Silicon realpolitik world
11 AI-experienced world
12 Resource-localism world (set of spaces shaped by a dominant scarce resource like energy or water)
These worlds are worlds because it is possible to imagine lifestyles entirely immersed in them. They are consequential worlds because each already has enough momentum and historical leverage to reshape the composite understanding of real. What climate refugees do in climate refugee world will shape what all of us do in the real world.
World 4 is worth some elaboration. In it I include almost everything that dominates current headlines and feels “real,” including spaces dominated by billionaires, governments, universities, and traditional media. Yet, despite the degree to which it dominates the current distribution of attention, my sense is that it has only a small and diminishing role to play in defining the New Real. When we use the phrase in the real world in the coming decade, we will not mainly be referring to World 4.
World 11 is also worth some elaboration. One reason I believe weirdness is here to stay is that the emerging ontologies of the New Real are neither entirely human in origin, nor likely to respect human desires for common-sense conceptual stability in “reality.”
For the moment, AIs inhabit the world on our terms, relating to it through our categories. But it is already clear that they are not restricted to human categories, or even to categories expressible within human languages. Nor should they be, if we are to tap into their powers. They are limited by human ontology only to the extent that their presence in the world must be mediated by humans. … they will definitely evolve in ways that keep the real world permaweird.
Can we slap on a usefully descriptive short label onto the New Real, comparable to “Neoliberal World” or “Cold War World”?
There is no such obviously dominant eigenvector of consequentiality in the New Real, but the most obvious candidate is probably global warming. So we might call the New Real the warming world. Somehow though, it doesn’t feel like warming shapes our experience of realness as clearly as its predecessors. Powerful though the calculus of climate change is, it operates via too many subtle degrees of indirection to shape our sense of the real. Still, I’ll leave the phrase there for your consideration.
An idiosyncratic personal candidate … is magic-realist world. A world that is consequentially real and permaweird is a world that feels magical and real at the same time, and is sustainably inhabitable: but only if you let go of the craving for a sense of normalcy.
It offers unprecedented, god-like modes of agency that are available for almost anyone to exercise…The catch is this — attachment to normalcy equals learned helplessness in the face of all this agency. If you want to feel normal, almost none of the magical agency is available to you. An attachment to normalcy limits you to mere magical thinking, in the comforting company of an equally helpless majority. If you are willing to live with a sense of magical realism, a great deal more suddenly opens up.
This, I suspect, is the flip side of the idea that “we are as gods, and might as well get good at it.” There is no normal way to feel like a god. A magical being must necessarily experience the world as a magical doing. To experience the world as permaweird, is to experience it as a god.
This is not necessarily an optimistic thought. A real world, shaped by god-like humans, each operating by an idiosyncratic sense of their own magical agency, is not necessarily a good world, or a world that conjures up effective collective responses to its shared planetary problems.
But it is a world that does something, rather than nothing, and that’s a start.
Monday, August 21, 2023
Never-Ending Stories - a survival tactic for uncertain times
I keep returning to clips of text that I abstracted from a recent piece by Venkatesh Rao. It gets richer for me on each re-reading. I like its points about purpose being inappropriate for uncertain times, when the simplification offered by a protocol narrative is the best route to survival. I post the clips here for my own future use, and also because they might interest some MindBlog readers:
Never-Ending Stories
Marching beat-by-beat into a Purposeless infinite horizon
During periods of emergence from crisis conditions (both acute and chronic), when things seem overwhelming and impossible to deal with, you often hear advice along the following lines:
Take it one day at a time
Take it one step at a time
Sleep on it; morning is wiser than evening
Count to ten
Or even just breathe
All these formulas have one thing in common: they encourage you to surrender to the (presumed benevolent) logic of a situation at larger temporal scales by not thinking about it, and only attempt to exercise agency at the smallest possible temporal scales.
These formulas typically move you from a state of high-anxiety paralyzed inaction or chaotic, overwrought thrashing, to deliberate but highly myopic action. They implicitly assume that lack of emotional regulation is the biggest immediate problem and attempt to get you into a better-regulated state by shrinking time horizons. And that deliberate action (and more subtly, deliberate inaction) is better than either frozen inaction or overwrought thrashing.
There is no particular reason to expect taking things step-by-step to be a generally good idea. Studied, meditative myopia may be good for alleviating the subjective anxieties induced by a stressful situation, but there’s no reason to believe that the objective circumstances will yield to the accumulating power of “step-by-step” local deliberateness.
So why is this common advice? And is it good advice?
I’m going to develop an answer using a concept I call narrative protocols. This step-by-step formula is a typical invocation of such protocols. They seem to work better than we expect under certain high-stress conditions.
Protocol Narratives, Narrative Protocols
Loosely speaking, a protocol narrative is a never-ending story. I’ll define it more precisely as follows:
A protocol narrative is a never-ending story, without a clear capital-P Purpose, driven by a narrative protocol that can generate novelty over an indefinite horizon, without either a) jumping the shark, b) getting irretrievably stuck, or c) sinking below a threshold of minimum viable unpredictability.
A narrative protocol, for the purposes of this essay, is simply a storytelling formula that allows the current storytellers to continue the story one beat at a time, without a clear idea of how any of the larger narrative structure elements, like scenes, acts, or epic arcs, might evolve.
Note that many narrative models and techniques, including the best-known one, the Hero’s Journey, are not narrative protocols because they are designed to tell stories with clear termination behaviors. They are guaranteed-ending stories. They may be used to structure episodes within a protocol narrative, but by themselves are not narrative protocols.
This pair of definitions is not as abstract as it might seem. Many real-world fictional and non-fictional narratives approximate never-ending stories.
Long-running extended universe franchises (Star Wars, Star Trek, MCU), soap operas, South Park …, the Chinese national grand narrative, and perhaps the American one as well, are all approximate examples of protocol narratives driven by narrative protocols.
Protocols and Purpose
In ongoing discussions of protocols, several of us independently arrived at a conclusion that I articulate as protocols have functions but not purposes, by which I mean capital-P Purposes.
Let’s distinguish two kinds of motive force in any narrative:
1. Functions are causal narrative mechanisms for solving particular problems in a predictable way. For example, one way to resolve a conflict between a hero and a villain is a fight. So a narrative technology that offers a set of tropes for fights has something like a fight(hero, villain) function that skilled authors or actors can invoke in specific media (text, screen, real-life politics). You might say that fight(hero, villain) transitions the narrative state causally from a state of unresolved conflict to resolved conflict. Functions need not be dramatic or supply entertainment though; they just need to move the action along, beat-by-beat, in a causal way.
2. Purposes are larger philosophical theses whose significance narratives may attest to, but do not (and cannot) exhaust. These theses may take the form of eternal conditions (“the eternal struggle between good and neutral”), animating paradoxes (“If God is good, why does He allow suffering to exist?”), or historicist, teleological terminal conditions. Not all stories have Purposes, but the claim is often made that the more elevated sort can and should. David Mamet, for instance, argues that good stories engage with and air eternal conflicts, drawing on their transformative power to drive events, without exhausting them.
In this scheme, narrative protocols only require a callable set of functions to be well-defined. They do not need, and generally do not have Purposes. Functions can sustain step-by-step behaviors all by themselves.
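To make the idea of functions-without-Purposes concrete for myself, here is a minimal Python sketch of a narrative protocol as nothing but a bag of callable, beat-generating functions. The function names (fight, complicate, reconcile) and the tension counter are my own illustrative inventions, not anything from Rao's essay; the point is only that such a loop can keep a story moving indefinitely without ever consulting a larger Purpose.

```python
import random

def fight(state):
    # Resolve an unresolved conflict causally: tension drops.
    state["tension"] = max(0, state["tension"] - 2)
    return "The hero and the villain fight; the conflict is resolved, for now."

def complicate(state):
    # Introduce a new unresolved conflict so the story never runs dry.
    state["tension"] += 1
    return "A new rival appears, reopening old wounds."

def reconcile(state):
    state["tension"] = max(0, state["tension"] - 1)
    return "Old enemies share an uneasy truce over a campfire."

FUNCTIONS = [fight, complicate, reconcile]

def next_beat(state):
    # A protocol only needs to pick *some* applicable function for the next
    # beat; it never consults an epic arc or a terminal Purpose.
    applicable = [f for f in FUNCTIONS if f is not fight or state["tension"] > 0]
    return random.choice(applicable)(state)

state = {"tension": 1}
for _ in range(5):  # could just as well be an indefinite, never-ending loop
    print(next_beat(state))
```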
What’s more, not only are Purposes not necessary, they might even be actively harmful during periods of crisis, when arguably a bare-metal protocol narrative, comprising only functions, should exist.
There is, in fact, a tradeoff between having a protocol underlying a narrative, and an overarching Purpose guiding it from “above.”
The Protocol-Purpose Tradeoff
During periods of crisis, when larger logics may be uncomputable, and memory and identity integration over longer epochs may be intractable, it pays to shorten horizons until you get to computability and identity integrity — so long as the underlying assumptions that movement and deliberation are better than paralysis and thrashing hold.
The question remains though. When are such assumptions valid?
This is where the notion of a protocol enters the picture in a fuller way. There is protocol in the sense of a short foreground behavior sequence (like step-by-step), but there is also the idea of a big-P Protocol, as in a systematic (and typically constructed rather than natural) reality in the background that has more lawful and benevolent characteristics than you may suspect.
Enacting protocol narratives is enacting trust in a big-P Protocolized environment. You trust that the protocol narrative is much bigger than the visible tip of the iceberg that you functionally relate to.
As a simple illustration, on a general somewhat sparse random graph, trying to navigate by a greedy or myopic algorithm, one step at a time, to get to destination coordinates, is likely to get you trapped in a random cul-de-sac. But that same algorithm, on a regular rectangular grid, will not only get you to your destination, it will do so via a shortest path. You can trust the gridded reality more, given the same foreground behaviors.
In this example, the grid underlying the movement behavior is the big-P protocol that makes the behavior more effective than it would normally be. It serves as a substitute for the big-P purpose.
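Here is a toy Python sketch of that contrast, my own construction with assumptions stated in the comments: Manhattan moves on the grid, and a crude "numeric closeness to the goal node" heuristic on the random graph. The details are arbitrary; the point is that the identical myopic, step-by-step rule is trustworthy on the grid and unreliable on the unstructured graph.

```python
import random

def greedy_on_grid(start, goal):
    """One step at a time, always reducing Manhattan distance to the goal."""
    x, y = start
    steps = 0
    while (x, y) != goal:
        if x != goal[0]:
            x += 1 if goal[0] > x else -1
        else:
            y += 1 if goal[1] > y else -1
        steps += 1
    return steps  # equals the Manhattan distance, i.e. a shortest path

def greedy_on_random_graph(n=200, p=0.02, seed=0):
    """Greedy hop to the unvisited neighbor 'closest' to the goal node n-1."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    node, goal, visited = 0, n - 1, {0}
    while node != goal:
        candidates = [v for v in nbrs[node] if v not in visited]
        if not candidates:
            return None  # trapped in a cul-de-sac
        node = min(candidates, key=lambda v: abs(goal - v))
        visited.add(node)
    return len(visited) - 1

print(greedy_on_grid((0, 0), (5, 7)))  # 12: myopia plus a grid gives a shortest path
print(greedy_on_random_graph())        # may print None: the same rule can stall here
```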
This also gives us a way to understand the promises, if not the realities, of big-P purposes of the sort made by religion, and why there is an essential tension and tradeoff here.
To take a generic example, let’s say I tell you that in my religion, the cosmos is an eternal struggle between Good and Evil, and that you should be Good in this life in order to enjoy a pleasurable heaven for eternity (terminal payoff) as well as to Do The Right Thing (eternal principle).
How would you use it?
This is not particularly useful in complex crisis situations where good and evil may be hard to disambiguate, and available action options may simply not have a meaningful moral valence.
The protocol directive of step-by-step is much less opinionated. It does not require you to act in a good way. It only requires you to take a step in a roughly right direction. And then another. And another. The actions do not even need to be justifiably rational with respect to particular consciously held premises. They just need to be deliberate.
*****
A sign that economic narratives are bare-bones protocol narratives is the fact that they tend to continue uninterrupted through crises that derail or kill other kinds of narratives. Through the Great Weirding and the Pandemic, we still got GDP, unemployment, inflation, and interest rate “stories.”
I bet that even if aliens landed tomorrow, even though the rest of us would be in a state of paralyzed inaction, unable to process or make sense of events, economists would continue to publish their numbers and argue about whether aliens landing is inflationary or deflationary. And at the microeconomic level, Matt Levine would probably write a reassuring Money Stuff column explaining how to think about it all in terms of SEC regulations and force majeure contract clauses.
I like making fun of economists, but if you think about this, there is a profound and powerful narrative capability at work here. Strong protocol narratives can weather events that are unnarratable for all other kinds of narratives. Events that destroy high-Purpose religious and political narratives might cause no more than a ripple in strong protocol narratives.
So if you value longevity and non-termination, and you sense that times are tough, it makes sense to favor Protocols over Purposes.
***********
Step-by-Step is Hard-to-Kill
While economic narratives provide a good and clear class of examples of protocol narratives, they are not the only or even best examples.
The best examples are ones that show that a bare set of narrative functions is enough to sustain psychological life indefinitely. That surprisingly bleak narratives are nevertheless viable.
The very fact that we can even talk of “going through the motions” or feeling “empty and purposeless” when a governing narrative for a course of events is unsatisfying reveals that something else is in fact continuing, despite the lack of Purpose. Something that is computationally substantial and life-sustaining.
I recall a line from (I think) an old Desmond Bagley novel I read as a teenager, where a hero is trudging through a trackless desert. His inner monologue is going, one bloody foot after the next bloody foot; one bloody step after the next bloody step.
Weird though it might seem, that’s actually a complete story. It works as a protocol narrative. There is a progressively summarizable logic to it, and a memory-ful evolving identity to it. If you’re an economist, it might even be a satisfying narrative, as good as “number go up.”
Protocol narratives only need functions to keep going.
They do not need Purposes, and generally are, to varying degrees, actively hostile to such constructs. It’s not just take it one day at a time, but an implied don’t think about weeks and months and the meaning of life; it might kill you.
While protocol narratives may tolerate elements of Purpose during normal times, they are especially hostile to them during crisis periods. If you think about it, step-by-step advancement of a narrative is a minimalist strategy. If a narrative can survive on a step-by-step type protocol alone, it is probably extraordinarily hard to kill, and doing more likely adds risk and fragility (hence the Protocol-Purpose tradeoff).
During periods of crisis, narrative protocols switch into a kind of triage mode where only step-by-step movement is allowed (somewhat like how, in debugging a computer program, stepping through code is a troubleshooting behavior). More abstract motive forces are deliberately suspended.
I like to think of the logic governing this as exposure therapy for life itself. In complex conditions, the most important thing to do is simply to choose life over and over, deliberately, step-by-step. To keep going is to choose life, and it is always the first order of business.
This is why, as I noted in the opening section, lack of emotional regulation is the first problem to address. Because in a crisis, if it is left unmanaged, it will turn into a retreat from life itself. As Franklin D. Roosevelt said, the only thing we have to fear is fear itself.
To reach for loftier abstractions than step-by-step in times of crisis is to retreat from life. Purpose is a life-threatening luxury you cannot afford in difficult times. But a narrative protocol will keep you going through even nearly unnarratable times. And even if it feels like merely going through empty motions, sometimes all it takes to choose life is to be slightly harder to kill.
Monday, July 24, 2023
The evolution of transhuman forms - a MindBlog paragraph edited by GPT implementations
In this post I am documenting the final result of passing a paragraph I wrote on future human evolution through a number of different GPTs (generative pre-trained transformers). This is to archive the results of this exercise for myself, and in case it might be of interest to a few MindBlog readers. The bots were prompted with a request to rewrite or edit the paragraph to make it more accessible to a reader with a college education.
The bottom line is that the edited versions with the most depth and coherence were obtained using Open AI’s ChatGPT-4. Bard.google.com, Google Workspace Labs, and Microsoft’s Bing, respectively, provided increasingly less useful versions. Here is my original paragraph:
The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon exceed those of humans. We have brought our understanding of what human beings as they now exist are about to a plateau level at which further increments of understanding will become increasingly small. The cream has been skimmed, the best cherries have been picked, and it seems appropriate to turn our attention now to studying the evolution of transhuman forms utilizing LLMs that are better able to manage themselves and the planet. The action is in the pretraining P of GPT, as the models move beyond their initial human management to the stage of evolving themselves, hopefully into benign versions of Darwin’s “forms most beautiful and most wonderful.” What is the point of humans other than being self-replicating machines that evolve into further forms, hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Can they evolve into transhuman minds (or mind, singular)?

Here is the result of including some of the edits suggested by ChatGPT-4’s regenerated response, a version 3 of the paragraph above:
The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon surpass those of humans.
Our understanding of human nature as it currently exists has reached a plateau level, at which further increments of understanding are becoming increasingly small. The cream has been skimmed, the best cherries have been picked. Consequently it seems the right time to shift our focus toward studying the evolution of 'transhuman' forms. These are beings that could potentially leverage advanced language models (LLMs) to better self-regulate themselves and manage the planet.
The real crux of the matter lies in the pretraining phase of these transformers (symbolized by 'P' in GPT). This is when the models are initially developed and fine-tuned. As they evolve beyond their human programming the optimistic vision is that they might transform into benign versions of what Charles Darwin called “forms most beautiful and most wonderful.”
From a philosophical perspective, one might ask: What is the ultimate purpose of human beings, if not to serve as self-replicating entities that evolve into more advanced forms - hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Is it possible for humans to evolve into a collective of transhuman minds or even a singular, transcendent mind? These are questions worth exploring as we stand on the brink of an AI-enhanced future.
Wednesday, July 12, 2023
The True Threat of Artificial Intelligence
Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization...This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.
They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.
But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.
Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.
Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.
Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.
It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.
Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).
These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.
A.G.I. will never overcome the market’s demands for profit.
A.G.I. will dull the pain of our thorniest problems without fixing them.
A.G.I. undermines civic virtues and amplifies trends we already dislike.
Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.
Monday, May 08, 2023
Can a theory of events-in-time - our future history - ever be crafted?
I first drafted this as I usually do, to pass on clips that attempt to extract the core points from a fascinating brief piece by Venkatesh Rao. I then decided to ask ChatGPT-4, Google's Bard, and Microsoft's Bing to "Please condense the core ideas in the following document into a few paragraphs:" The response from ChatGPT-4 was amazing, so I paste it in here before giving you my clips:
Venkatesh Rao explores the idea of the "End of History" as an analogy to the "End of Biology," where biology transitioned from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter. The End of History could signify a move away from vitalist understandings of the logic of events-in-time. Rao acknowledges that the End of History theory is not widely popular due to its non-constructive nature, as it subverts vitalist understanding of events without providing a parsimonious and generative alternative.
Rao suggests that deeper logics of events-in-time exist in nature, with biological evolution and thermodynamics offering more complex explanations than a mere "one damn thing after another" perspective. He proposes that events-in-time might be viewed as the trace of an irreversible, non-ergodic computation process, with long-term collective memory playing a significant role. History, as we have understood it so far, might be considered a particular memory architecture with a specific irreversible and non-ergodic structure to the evolving computation. This perspective could potentially pave the way for a more refined and deliberate understanding of future events, reducing events-in-time after the End of History to a specialized kind of computation.
Here are my clips that ChatGPT-4 used to generate the above condensation:
What if the Hegel-Fukuyama idea of the End of History is something like Darwin’s idea of natural selection?
The Darwinian century began with On the Origin of Species in 1859 and ended with the discovery of the structure of DNA in 1953. Humanity experienced an End of Biology moment somewhere between those bookend events…a demotion of the discipline from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter…Biology went from being an inscrutable aspect of providence to an emerging engineering discipline, subservient to physics and mathematics by way of chemistry.
By analogy, the End of History moment is something like an end to vitalist understandings of the logic of events-in-time…There is no role for divine agency, and no justification for assigning a particular positive or negative valence to apparent secular tendencies in the stream of events…The fact that the theory is historicist without being normative is perhaps what makes it so powerfully subversive. The End of History theory is the historicism that kills all other historicisms. Past the End of History, notions like progress must be regarded as analogous to notions like élan vital past the End of Biology. …it is undeniable that 30 years in, the End of History theory is still not particularly popular…One obvious reason is that it is non-constructive. It subverts a vitalist understanding of events in time without supplying a more parsimonious and generative alternative.
In Fukuyama’s theory, there are no notions comparable to variation and natural selection that allow us to continue making sense of events-in-time. There are no Mendelian clues pointing to something like a genetics of events-in-time. There is no latent Asimovian psychohistorical technology lurking in the details of the End of History theory…Perhaps one damn thing after another is where our understanding of events in time ought to end, for our own good.
I think this is too pessimistic though. Deeper logics of events-in-time abound in nature. Even biological evolution and thermodynamics, which are more elemental process aspects of reality, admit more than a one damn thing after another reading. History, as a narrower class of supervening phenomena that must respect the grammars of both, ought to admit more interesting readings, based on broadly explanatory laws that are consistent with both, but more specific than either. Dawkins’ memetic view of cultural evolution, and various flavors of social darwinism, constitute first-order attempts at such laws. Some flavors of cosmism and transhumanism constitute more complex attempts that offer hope of wresting ever-greater agency from the universe.
So what does explain the logic of events-in-time in a way that allows us to make sense of events-in-time past the End of History, in a way that improves upon a useless one damn thing after another sense of it, and says something more than the laws of evolution or thermodynamics?
I don’t have an answer, but I have a promising clue: somehow, events-in-time must be viewed as the trace of an irreversible, non-ergodic computation process, in which long-term collective memory plays a significant role.
History, as we have understood it so far, is something like a particular memory architecture that assumes a particular irreversible and non-ergodic structure to the evolving computation. The contingency and path dependence of events-in-time in human affairs is no reason to believe there cannot also be theoretical richness within the specificity. A richness that might open up futures that can be finely crafted with a psychohistorical deliberateness, rather than simply vaguely anticipated and crudely shaped.
Perhaps, just as life after the End of Biology was reduced to a specialized kind of chemistry, events-in-time, after the End of History, can be reduced to a specialized kind of computation.
Friday, March 17, 2023
Is the hype over A.I. justified? Does it really change everything?
“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
...We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
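To make "predictions about the next word" concrete, here is a toy bigram model in Python. This is my own sketch rather than anything resembling the real architecture: actual systems like GPT-4 are deep transformer networks over subword tokens with billions of learned parameters, and that scale is exactly what makes the point about inscrutability in the next quote bite.

```python
from collections import Counter, defaultdict

# A toy "predict the next word" model: count which word follows which.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return a probability distribution over likely next words."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))  # {'cat': 0.5, 'mat': 0.5}
print(predict_next("cat"))  # {'sat': 0.5, 'slept': 0.5}
```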
“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”
That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.
The stakes here are material and they are social and they are metaphysical. O’Gieblyn observes that “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”
This is an inversion of centuries of thought, O’Gieblyn notes, in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with A.I. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” she writes.
If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing toward companies with A.I. expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.
Katja Grace, an A.I. safety researcher, summed up this illogic pithily. Slowing down “would involve coordinating numerous people — we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional.”
One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.
What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognize that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Skepticism is more comfortable. But something Davis writes rings true to me: “In the court of the mind, skepticism makes a great grand vizier, but a lousy lord.”
Wednesday, March 08, 2023
A skeptical take on the AI revolution
Marcus: the system underneath ChatGPT is the king of pastiche…to a first approximation, it is cutting and pasting things…There’s also a kind of template aspect to it. So it cuts and pastes things, but it can do substitutions, things that paraphrase. So you have A and B in a sequence, it finds something else that looks like A, something else that looks like B, and it puts them together. And its brilliance comes from that when it writes a cool poem. And also its errors come from that because it doesn’t really fully understand what connects A and B.
Klein: But … aren’t human beings also kings of pastiche? On some level I know very, very little about the world directly. If you ask me about, say, the Buddhist concept of emptiness, which I don’t really understand, isn’t my answer also mostly an averaging out of things that I’ve read and heard on the topic, just recast into my own language?
Marcus: Averaging is not actually the same as pastiche. And the real difference is for many of the things you talk about, not all of them, you’re not just mimicking. You have some internal model in your brain of something out there in the world…I have a model of you. I’m talking to you right now, getting to know you, know a little bit about your interests — don’t know everything, but I’m trying to constantly update that internal model. What the pastiche machine is doing is it’s just putting together pieces of text. It doesn’t know what those texts mean.
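As an aside, here is a toy Python cartoon of Marcus's pastiche-with-substitution metaphor - my own sketch, with made-up sequences and a hand-built "lookalike" table, and emphatically not how ChatGPT is actually implemented. It memorizes A-B sequences, swaps in things that superficially resemble A and B, and pastes the result together with no model of what any of it means.

```python
import random

# Memorized surface sequences (the "cut" part).
memorized = [("knight", "rescues", "princess"),
             ("dragon", "guards", "treasure")]

# Purely surface-level "looks like" substitutions; no understanding involved.
lookalikes = {
    "knight": ["samurai", "astronaut"],
    "princess": ["duchess", "robot"],
    "dragon": ["kraken", "firewall"],
    "treasure": ["password", "hoard"],
}

def pastiche():
    a, verb, b = random.choice(memorized)          # cut a remembered A-B sequence
    new_a = random.choice(lookalikes.get(a, [a]))  # substitute something that looks like A
    new_b = random.choice(lookalikes.get(b, [b]))  # ...and something that looks like B
    return f"The {new_a} {verb} the {new_b}."      # paste them back together

print(pastiche())  # e.g. "The astronaut rescues the robot." - fluent, but no model of the world
```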
Klein: Sam Altman, C.E.O. of OpenAI, said “my belief is that you are energy flowing through a neural network.” That’s it. And he means by that a certain kind of learning system.
Marcus: …there’s both mysticism and confusion in what Sam is saying..it’s true that you are, in some sense, just this flow through a neural network. But that doesn’t mean that the neural network in you works anything like the neural networks that OpenAI has built..neural networks that OpenAI has built, first of all, are relatively unstructured. You have, like, 150 different brain areas that, in light of evolution and your genome, are very carefully structured together. It’s a much more sophisticated system than they’re using…
I think it’s mysticism to think that if we just make the systems that we have now bigger with more data, that we’re actually going to get to general intelligence. There’s an idea called, “scale is all you need.”..There’s no law of the universe that says as you make a neural network larger, that you’re inherently going to make it more and more humanlike. There’s some things that you get, so you get better and better approximations to the sound of language, to the sequence of words. But we’re not actually making that much progress on truth…these neural network models that we have right now are not reliable and they’re not truthful…just because you make them bigger doesn’t mean you solve that problem.
Some things get better as we make these neural network models, and some don’t. The reason that some don’t, in particular reliability and truthfulness, is because these systems don’t have those models of the world. They’re just looking, basically, at autocomplete. They’re just trying to autocomplete our sentences. And that’s not the depth that we need to actually get to what people call A.G.I., or artificial general intelligence.
Klein: from Harry Frankfurt paper called “On Bullshit”…“The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine may not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”…his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. ..bullshit just has no relationship, really, to the truth…what unnerves me a bit about ChatGPT is the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all.
…systems like these pose a real and imminent threat to the fabric of society…You have a news story that looks like, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these or 100 of these or 1,000 or 10,000 of these? Then it becomes very difficult to monitor them.
We might be able to build new kinds of A.I., and I’m personally interested in doing that, to try to detect them. But we have no existing technology that really protects us from the onslaught, the incredible tidal wave of potential misinformation like this.
Russian trolls spent something like a million dollars a month during the 2016 election… they can now buy their own version of GPT-3 to do it all the time. They pay less than $500,000, and they can do it in limitless quantity instead of bound by the human hours.
…if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t going to even know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that actually threatens the websites themselves, the search engines themselves.

The Klein/Marcus conversation then moves through several further areas: how large language models can be used to craft responses that nudge users towards clicking on advertising links, the declining returns of bigger models that are not helping in comprehending larger pieces of text, the use of ‘woke’ guardrails that yield pablum as answers to reasonable questions, the lack of progress in determining the trustworthiness of neural network responses, the eventual possible fusion of neural network, symbol processing, and rule generating systems, and the numerous hurdles to be overcome before an artificial general intelligence remotely equivalent to ours is constructed.
Wednesday, March 01, 2023
Artificial intelligence and personhood
…stripping away yet another layer of our anthropocentric conceits is obvious. But which conceits specifically, and what, if anything is left behind? In case you weren’t keeping track, here’s the current Copernican Moments list:
The Earth goes around the Sun,
Natural selection rather than God created life,
Time and space are relative,
Everything is Heisenberg-uncertain
“Life” is just DNA’s way of making more DNA,
Computers wipe the floor with us anywhere we can keep score
There’s not a whole lot left at this point is there? I’m mildly surprised we End-of-History humans even have any anthropocentric conceits left to strip away. But apparently we do. Let’s take a look at this latest Fallen Conceit: Personhood.
…at a basic level: text is all it takes to produce personhood. We knew this from the experience of watching good acting…We just didn’t recognize the significance. Of course you can go beyond, adding a plastic or human body around the text production machinery to enable sex for example, but those are optional extras. Text is all you need to produce basic see-and-be-seen I-you personhood.
Chatbots do, at a vast scale, and using people’s data traces on the internet rather than how they present in meatspace, what the combination of fiction writers and actors does in producing convincing acting performances of fictional persons.
In both cases, text is all you need. That’s it. You don’t need embodiment, meatbag bodies, rich sensory memories.
This is actually a surprisingly revealing fact. It means we can plausibly exist, at least as social creatures, products of I-you seeings, purely on our language-based maps of reality.
Language is a rich process, but I for one didn’t suspect it was that rich. I thought there was more to seeing and being seen, to I-you relations.
Still, even though text is all you need for personhood, the discussion doesn’t end there. Because personhood is not all there is to, for want of a better word, being. Seeing, being seen, and existing at the nexus of a bunch of I-you relationships, is not all there is to being.
What is the gap between being and personhood? Just how much of being is constituted by the ability to see and be seen, and being part of I-you relationships?
The ability to doubt, unlike the ability to think (which I do think is roughly equivalent to the ability to see and be seen in I-you ways), is not reducible to text. In particular, text is all it takes to think and produce or consume unironically believable personhood, but doubt requires an awareness of the potential for misregistration between linguistic maps and the phenomenological territory of life. If text is all you have, you can be a person, but you cannot be a person in doubt.
Doubt is eerily missing in the chat transcripts I’ve seen, from both ChatGPT and Sydney. There are linguistic markers of doubt, but they feel off, like a color-blind person cleverly describing colors. In a discussion, one person suggested this is partly explained by the training data. Online, textually performed personas are uncharacteristically low on doubt, since the medium encourages a kind of confident stridency.
But I think there’s something missing in a more basic way, in the warp and woof of the conversational texture. At some basic level, rich though it is, text is missing important non-linguistic dimensions of the experience of being. But what’s missing isn’t cosmetic aspects of physicality, or the post-textual intimate zones of relating, like sex (the convincing sexbots aren’t that far away). What’s missing is doubt itself.
The signs, in the transcripts, of repeated convergence to patterns of personhood that present as high-confidence paranoia are, I think, due to the gap between thought and doubt: cogito and dubito. Text is all you need to be a person, but context is additionally necessary to be a sane person and a full being. And doubt is an essential piece of the puzzle there.
So where does doubt live? Where is the aspect of being that’s doubt, but not “thought” in a textual sense?
For one, it lives in the sheer quantity of bits in the world that are not textual. There are exabytes of textual data online, but there is orders of magnitude more data in every grain of sand. Reality just has vastly more data than even the impressively rich map that is language. And to the extent we cannot avoid being aware of this ocean of reality unfactored into our textual understandings, it shapes and creates our sense of being.
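As a rough back-of-envelope check on that comparison (the numbers below are my own illustrative assumptions, not figures from the post), take online text to be a few dozen exabytes and a sand grain to be roughly 0.5 mm of quartz:

```latex
% all quantities are order-of-magnitude assumptions
\begin{align*}
\text{online text} &\sim 10^{19}\ \text{bytes} \approx 10^{20}\ \text{bits}\\
\text{one quartz grain} &\sim 5\times 10^{18}\ \text{atoms}\\
\text{atomic positions at } \sim 10^{2}\ \text{bits/atom} &\Rightarrow \sim 5\times 10^{20}\ \text{bits per grain}
\end{align*}
```

Even at this crude level of description a single grain already matches the entire textual web, and finer descriptions (momenta, electronic structure, and every additional grain) push the gap out by many further orders of magnitude.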
For another, even though with our limited senses we can only take in a tiny and stylized fraction of this overwhelming mass of bits around us, the stream of inbound sense-bits still utterly dwarfs what eventually trickles out as textual performances of personhood (and what is almost the same thing in my opinion, conventional social performances “in-person” which are not significantly richer than text — expressions of emotion add perhaps a few dozen bytes of bandwidth for example — I think of this sort of information stream as “text-equivalent” — it only looks plausibly richer than text but isn’t).
But the most significant part of the gap is probably experiential dark matter: we know we know vastly more than we can say. The gap between what we can capture in words and what we “know” of reality in some pre-linguistic sense is vast. The gap between an infant’s tentative babbling and Shakespeare is a rounding error relative to the gap within each of us between the knowable and the sayable.
So while it is surprising (though… is it really?) that text is all it takes to perform personhood with enough fidelity to provoke involuntary I-you relating in a non-trivial fraction of the population, it’s not all there is to being. This is why I so strongly argue for embodiment as a necessary feature of the fullest kind of AI.
The most surprising thing for me has been the fact that so many people are so powerfully affected by the Copernican moment and the dismantling of the human specialness of personhood.
I think I now see why it’s apparently a traumatic moment for at least some humans. The advent of chatbots that can perform personhood that at least some people can’t not relate to in I-you ways, coupled with the recognition that text is all it takes to produce such personhood, forces a hard decision.
Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.
And both these options are apparently extremely traumatic prospects. Either piles of mechanically digested text are spiritually special, or you are not. Either there is a sudden and alarming increase in your social universe, or a sudden sharp devaluation of mutualism as a component of identity.
Remember — I’m defining personhood very narrowly as the ability to be seen in I-you ways. It’s a narrow and limited aspect of being, as I have argued, but one that average humans are exceptionally attached to.
We are of course, very attached to many particular aspects of our beings, and they are not all subtle and ineffable. Most are in fact quite crude. We have identities anchored to weight, height, skin color, evenness of teeth, baldness, test scores, titles, net worths, cars, and many other things that are eminently effable. And many people have no issues getting bariatric surgery, wearing lifts, lightening or tanning their skin, getting orthodontics, hair implants, faking test scores, signaling more wealth than they possess, and so on. The general level of “sacredness” of strong identity attachments is fairly low.
But personhood, being “seen,” has hitherto seemed ineffably special. We think it’s the “real” us that is seen and does such seeing. We are somewhat prepared to fake or pragmatically alter almost everything else about ourselves, but treat personhood as a sacred thing.
Everything else is a “shallow” preliminary. But what is the “deep” or “real” you that we think lurks beneath? I submit that it is in fact a sacralized personhood — the ability to see and be seen. And at least for some people I know personally, that’s all there is to the real-them. They seem to sort of vanish when they are not being seen (and panic mightily about it, urgently and aggressively arranging their lives to ensure they’re always being seen, so they can exist — Trump and Musk are two prominent public examples).
And the trauma of this moment — again for some, not all of us — lies in the fact that text is all you need to produce this sacredly attached aspect of being.
I have a feeling that, as this technology becomes more widespread and integrated into everyday life, the majority of humans will initially choose some tortured, conflicted version of the first option — accepting that they cannot help but see piles of digested data in I-you ways, and trying to reclaim some sense of fragile, but still-sacred personhood in the face of such accommodation, while according as little sacredness as possible to the artificial persons, and looking for ways to keep them in their place, creating a whole weird theater of an expanding social universe.
A minority of us will be choosing the second option, but I suspect in the long run of history, this is in fact the “right” answer in some sense, and will become the majority answer. Just as with the original Copernican moment, the “right” answer was to let go of attachment to the idea of Earth as the center of the universe. Now the right answer is to let go of the idea that personhood and I-you seeing are special. It’s just a special case of I-it seeing that some piles of digested text are as capable of as tangles of neurons.
…there will also be a more generative and interesting aspect. Once we lose our annoying attachment to sacred personhood, we can also lose our attachment to specific personhoods we happen to have grown into, and make personhood a medium of artistic expression that we can change as easily as clothes or hairstyles. If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs (Large Language Models) to help you see-and-be-seen in arbitrary new ways?
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
What’s left, once this layer of anthropocentric conceit, static, stable personhood, dissolves in a flurry of multiplied matrices, Ballardian banalities, and imaginative larped personhoods being cheaply hallucinated in and out of existence with help from computers?
I think what is left is the irreducible individual subjective, anchored in dubito ergo sum. I doubt therefore I am.
Friday, July 22, 2022
The End of the World is Just the Beginning
In the new world that we are now entering, America is one of the few countries that can both feed itself and make all the widgets that it needs. Together with its partners in the NAFTA alliance, it is geographically and demographically secure, able to turn inwards and still maintain much of its population and lifestyle. Almost all other countries must either export or import energy, food, materials, or manufactured products. The free-trade transport routes that have permitted this are crumbling as America continues its withdrawal from guaranteeing a world order formed to oppose the Soviet Union, which fell in 1991. As the level of global trade diminishes, most countries outside the North American group must reduce their population levels and living standards. I was pointed to this book by listening to a Sam Harris "Making Sense" podcast titled "The End of Global Order," an interview with Peter Zeihan and Ian Bremmer. Zeihan integrates geopolitical and demographic perspectives to make a compelling case that the past few decades have been the best it will ever be in our lifetimes, because our world is breaking apart. For the past seventy-five years we have been living in a perfect moment made possible by post-World War II America's fostering of:
“an environment of global security so that any partner could go anywhere, anytime, interface with anyone, in any economic manner, participate in any supply chain and access any material input—all without needing a military escort. This butter side of the Americans’ guns-and-butter deal created what we today recognize as free trade. Globalization.” But,
“Thirty years on from the Cold War’s end, the Americans have gone home. No one else has the military capacity to support global security, and from that, global trade. The American-led Order is giving way to Disorder. Global aging didn’t stop once we reached that perfect moment of growth...The global worker and consumer base is aging into mass retirement. In our rush to urbanize, no replacement generation was ever born...“The 2020s will see a collapse of consumption and production and investment and trade almost everywhere. Globalization will shatter into pieces. Some regional. Some national. Some smaller. It will be costly. It will make life slower. And above all, worse.” Zeihan shows that America and its partners in the NAFTA accord, Canada and Mexico, enjoy a "Geography of Success" and demographics that will render them vastly better off than the rest of the world.
Perhaps the oddest thing of our soon-to-be present is that while the Americans revel in their petty, internal squabbles, they will barely notice that elsewhere the world is ending!!! Lights will flicker and go dark. Famine’s leathery claws will dig deep and hold tight. Access to the inputs—financial and material and labor—that define the modern world will cease existing in sufficient quantity to make modernity possible. The story will be different everywhere, but the overarching theme will be unmistakable: the last seventy-five years will long be remembered as a golden age, and one that didn’t last nearly long enough at that. In the introduction of his book, from which the above quotes are taken, Zeihan states that the book's real focus...
...is to map out what everything looks like on the other side of this change in condition. What are the new parameters of the possible? In a world deglobalized, what are the new Geographies of Success? The book's introduction and epilogue are useful summaries, and you should check out the very instructive graphics provided on Zeihan's website.
NOTE!! ADDENDUM TO POST 11/13/22 I am obliged to pass on a critique of Zeihan's shocking China predictions that points out some blatant errors in his numbers: Debunking Peter Zeihan’s Shocking and Popular China Predictions
Monday, November 08, 2021
It’s Quitting Season
It’s been a brutal few years. But we’ve gritted through. We’ve spent time languishing. We’ve had one giant national burnout. And now, finally, we’re quitting...We are quitting our jobs. Our cities. Our marriages. Even our Twitter feeds...And as we argue in the video, we’re not quitting because we’re weak. We’re quitting because we’re smart...younger Americans like 18-year-old singer Olivia Rodrigo and the extraordinary Simone Biles are barely old enough to rent a car but they are already teaching us about boundaries. They’ve seen enough hollowed-out millennials to know what the rest of us are learning: Don’t be a martyr to grit. I feel some personal resonance with the points made about a whole career path in the piece by Arthur Brooks, To Be Happy, Hide From the Spotlight, because this clip nails part of the reason I keep driving myself to performances (writing, lecturing, music) by rote habit:
Assuming that you aren’t a pop star or the president, fame might seem like an abstract problem. The thing is, fame is relative, and its cousin, prestige — fame among a particular group of people — is just as fervently chased in smaller communities and fields of expertise. In my own community of academia, honors and prestige can be highly esoteric but deeply desired. I suggest you read the whole article, but here are a few further clips:
Even if a person’s motive for fame is to set a positive example, it mirrors the other, less flattering motives insofar as it depends on other people’s opinions. And therein lies the happiness problem. Thomas Aquinas wrote in the 13th century, “Happiness is in the happy. But honor is not in the honored.” ...research shows that fame ...based on what scholars call extrinsic rewards... brings less happiness than intrinsic rewards...fame has become a form of addiction. This is especially true in the era of social media, which allows almost anyone with enough motivation to achieve recognition by some number of strangers...this is not a new phenomenon. The 19th-century philosopher Arthur Schopenhauer said fame is like seawater: “The more we have, the thirstier we become.”
No social scientists I am aware of have created a quantitative misery index of fame. But the weight of the indirect evidence above, along with the testimonies of those who have tasted true fame in their time, should be enough to show us that it is poisonous. It is “like a river, that beareth up things light and swollen,” said Francis Bacon, “and drowns things weighty and solid.” Or take it from Lady Gaga: “Fame is prison.”
...Pay attention to when you are seeking fame, prestige, envy, or admiration—especially from strangers. Before you post on social media, for example, ask yourself what you hope to achieve with it...Say you want to share a bit of professional puffery or photos of your excellent beach body. The benefit you experience is probably the little hit of dopamine you will get as you fire it off while imagining the admiration or envy others experience as they see it. The cost is in the reality of how people will actually see your post (and you): Research shows that people will largely find your boasting to be annoying—even if you disguise it with a humblebrag—and thus admire you less, not more. As Shakespeare helpfully put it, “Who knows himself a braggart, / Let him fear this, for it will come to pass / that every braggart shall be found an ass.”
The poet Emily Dickinson called fame a “fickle food / Upon a shifting plate.” But far from a harmless meal, “Men eat of it and die.” It’s a good metaphor, because we have the urge to consume all kinds of things that appeal to some anachronistic neurochemical impulse but that nevertheless will harm us. In many cases—tobacco, drugs of abuse, and, to some extent, unhealthy foods—we as a society have recognized these tendencies and taken steps to combat them by educating others about their ill effects.
Why have we failed to do so with fame? None of us, nor our children, will ever find fulfillment through the judgment of strangers. The right rule of thumb is to treat fame like a dangerous drug: Never seek it for its own sake, teach your kids to avoid it, and shun those who offer it.
Wednesday, October 20, 2021
A debate over stewardship of global collective behavior
Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems. The critique by Cheong and Jones:
In vivid detail, Bak-Coleman et al. describe explosively multiplicative global pathologies of scale posing existential risk to humanity. They argue that the study of collective behavior in the age of digital social media must rise to a “crisis discipline” dedicated to averting global ruin through the adaptive manipulation of social dynamics and the emergent phenomenon of collective behavior. Their proposed remedy is a massive global, multidisciplinary coalition of scientific experts to discover how the “dispersed networks” of digital media can be expertly manipulated through “urgent, evidence-based research” to “steward” social dynamics into “rapid and effective collective behavioral responses,” analogous to “providing regulators with information” to guide the stewardship of ecosystems. They picture the enlightened harnessing of yet-to-be-discovered scale-dependent rules of internet-age social dynamics as a route to fostering the emergent phenomenon of adaptive swarm intelligence.
We wish to issue an urgent warning of our own: Responding to the self-evident fulminant, rampaging pathologies of scale ravaging the planet with yet another pathology of scale will, at best, be ineffective and, at worst, counterproductive. It is the same thing that got us here. The complex international coalition they propose would be like forming a new, ultramodern weather bureau to furnish consensus recommendations to policy makers while a megahurricane is already making landfall. This conjures images of foot dragging, floor fights, and consensus building while looking for actionable “mechanistic insight” into social dynamics on the deck of the Titanic. After lucidly spotlighting the urgent scale-dependent mechanistic nature of the crisis, Bak-Coleman et al. do not propose any immediate measures to reduce scale, but rather offer that there “is reason to be hopeful that well-designed systems can promote healthy collective action at scale...” Hope is neither a strategy nor an action.
Despite lofty goals, the coalition they propose does not match the urgency or promise a rapid and collective behavioral response to the existential threats they identify. Scale reduction may be “collective,” but achieving it will have to be local, authentic, and without delay—that is, a response conforming to the “all hands on deck” swarm intelligence phenomena that are well described in eusocial species already. When faced with the potential for imminent global ruin lurking ominously in the fat tail (5) of the future distribution, the precautionary principle dictates that we should respond with now-or-never urgency. This is a simple fact. A “weather bureau” for social dynamics would certainly be a valuable, if not indispensable, institution for future generations. But there is no reason that scientists around the world, acting as individuals within their own existing social networks and spheres of influence, observing what is already obvious with their own eyes, cannot immediately create a collective chorus to send this message through every digital channel instead of waiting for a green light from above. “Urgency” is euphemistic. It is now or never. The Bak-Coleman and Bergstrom reply to the critique:
In our PNAS article “Stewardship of global collective behavior”, we describe the breakneck pace of recent innovations in information technology. This radical transformation has transpired not through a stewarded effort to improve information quality or to further human well-being. Rather, current technologies have been developed and deployed largely for the orthogonal purpose of keeping people engaged online. We cannot expect that an information ecology organized around ad sales will promote sustainability, equity, or global health. In the face of such impediments to rational democratic action, how can we hope to overcome threats such as global warming, habitat destruction, mass extinction, war, food security, and pandemic disease? We call for a concerted transdisciplinary response, analogous to other crisis disciplines such as conservation ecology and climate science.
In their letter, Cheong and Jones share our vision of the problem—but they express frustration at the absence of an immediately actionable solution to the enormity of challenges that we describe. They assert “swarm intelligence begins now or never” and advocate local, authentic, and immediate “scale reduction.” It’s an appealing thought: Let us counter pathologies of scale by somehow reversing course.
But it’s not clear what this would entail by way of practical, safe, ethical, and effective intervention. Have there ever been successful, voluntary, large-scale reductions in the scale of any aspect of human social life?
Nor is there reason to believe that an arbitrary, hasty, and heuristically decided large-scale restructuring of our social networks would reduce the long tail of existential risk. Rather, rapid shocks to complex systems are a canonical source of cascading failure. Moving fast and breaking things got us here. We can’t expect it to get us out.
Nor do we share the authors’ optimism about what scientists can accomplish with “a collective chorus … through every digital channel”. It is difficult to envision a louder, more vehement, and more cohesive scientific response than that to the COVID-19 pandemic. Yet this unified call for basic public health measures—grounded in centuries of scientific knowledge—nonetheless failed to mobilize political leadership and popular opinion.
Our views do align when it comes to the “now-or-never urgency” that Cheong and Jones highlight. Indeed, this is a key feature of a crisis discipline: We must act without delay to steer a complex system—while still lacking a complete understanding of how that system operates.
As scholars, our job is to call attention to underappreciated threats and to provide the knowledge base for informed decision-making. Academics do not—and should not—engage in large-scale social engineering. Our grounded view of what science can and should do in a crisis must not be mistaken for lassitude or unconcern. Worldwide, the unprecedented restructuring of human communication is having an enormous impact on issues of social choice, often to our detriment. Our paper is intended to raise the alarm. Providing the definitive solution will be a task for a much broader community of scientists, policy makers, technologists, ethicists, and other voices from around the globe.
Monday, August 16, 2021
What is our brain's spontaneous activity for?
Spontaneous brain dynamics are manifestations of top-down dynamics of generative models detached from action–perception cycles.
Generative models constantly produce top-down dynamics, but we call them expectations and attention during task engagement and spontaneous activity at rest.
Spontaneous brain dynamics during resting periods optimize generative models for future interactions by maximizing the entropy of explanations in the absence of specific data and reducing model complexity.
Low-frequency brain fluctuations during spontaneous activity reflect transitions between generic priors consisting of low-dimensional representations and connectivity patterns of the most frequent behavioral states.
High-frequency fluctuations during spontaneous activity in the hippocampus and other regions may support generative replay and model learning.
Brains at rest generate dynamical activity that is highly structured in space and time. We suggest that spontaneous activity, as in rest or dreaming, underlies top-down dynamics of generative models. During active tasks, generative models provide top-down predictive signals for perception, cognition, and action. When the brain is at rest and stimuli are weak or absent, top-down dynamics optimize the generative models for future interactions by maximizing the entropy of explanations and minimizing model complexity. Spontaneous fluctuations of correlated activity within and across brain regions may reflect transitions between ‘generic priors’ of the generative model: low dimensional latent variables and connectivity patterns of the most common perceptual, motor, cognitive, and interoceptive states. Even at rest, brains are proactive and predictive.
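One way to make "maximizing the entropy of explanations and minimizing model complexity" concrete is the standard variational free-energy bookkeeping this literature usually relies on; the decomposition below is the textbook form, not necessarily the authors' exact notation:

```latex
% Variational free energy for a generative model p(o,s) with observations o,
% hidden states s, and an approximate posterior ("explanation") q(s):
F \;=\; \mathbb{E}_{q(s)}\!\bigl[\ln q(s) - \ln p(o,s)\bigr]
  \;=\; \underbrace{D_{\mathrm{KL}}\bigl[q(s)\,\|\,p(s)\bigr]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q(s)}\!\bigl[\ln p(o \mid s)\bigr]}_{\text{accuracy}}
```

When stimuli are weak or absent the accuracy term contributes little, so minimizing F pulls q(s) back toward the prior, trimming complexity; and with no data to favor one explanation over another, the least committed, highest-entropy q is preferred, which is roughly the sense in which rest is said to maximize the entropy of explanations.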
Wednesday, August 04, 2021
Historical language records reveal societal depression and anxiety in the past two decades higher than during the 20th century.
Fascinating work from Bollen et al. (open source):
Significance
Can entire societies become more or less depressed over time? Here, we look for the historical traces of cognitive distortions, thinking patterns that are strongly associated with internalizing disorders such as depression and anxiety, in millions of books published over the course of the last two centuries in English, Spanish, and German. We find a pronounced “hockey stick” pattern: Over the past two decades the textual analogs of cognitive distortions surged well above historical levels, including those of World War I and II, after declining or stabilizing for most of the 20th century. Our results point to the possibility that recent socioeconomic changes, new technology, and social media are associated with a surge of cognitive distortions.
Abstract
Individuals with depression are prone to maladaptive patterns of thinking, known as cognitive distortions, whereby they think about themselves, the world, and the future in overly negative and inaccurate ways. These distortions are associated with marked changes in an individual’s mood, behavior, and language. We hypothesize that societies can undergo similar changes in their collective psychology that are reflected in historical records of language use. Here, we investigate the prevalence of textual markers of cognitive distortions in over 14 million books for the past 125 y and observe a surge of their prevalence since the 1980s, to levels exceeding those of the Great Depression and both World Wars. This pattern does not seem to be driven by changes in word meaning, publishing and writing standards, or the Google Books sample. Our results suggest a recent societal shift toward language associated with cognitive distortions and internalizing disorders.
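For readers who want a feel for the measurement, here is a minimal sketch of the kind of prevalence series Bollen et al. describe; the marker list, function name, and sample data are invented for illustration, since the actual study applies a large validated n-gram lexicon to yearly Google Books counts in three languages.

```python
# Sketch: per-year prevalence of cognitive-distortion marker n-grams.
# The marker set and data below are illustrative stand-ins, not the paper's lexicon.

DISTORTION_MARKERS = {"everyone knows", "will never", "no one ever", "i am a"}

def marker_prevalence(texts_by_year):
    """Return {year: fraction of 2- and 3-grams that match a distortion marker}."""
    prevalence = {}
    for year, texts in texts_by_year.items():
        hits, total = 0, 0
        for text in texts:
            tokens = text.lower().split()
            for n in (2, 3):  # the markers above are two or three words long
                for i in range(len(tokens) - n + 1):
                    total += 1
                    if " ".join(tokens[i:i + n]) in DISTORTION_MARKERS:
                        hits += 1
        prevalence[year] = hits / total if total else 0.0
    return prevalence

if __name__ == "__main__":
    sample = {
        1950: ["the harvest was good and the town prospered"],
        2010: ["everyone knows this will never get better"],
    }
    print(marker_prevalence(sample))  # the 2010 toy text scores far higher
```

Plotted against publication year for millions of books, a series like this is what produces the "hockey stick" rise the authors report.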
Wednesday, July 14, 2021
The A.I. Revolution, Trillionaires and the Future of Political Power.
“The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel...This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.”...Altman's argument is this: Since the 1970s, computers have gotten exponentially better even as they’ve gotten cheaper, a phenomenon known as Moore’s Law. Altman believes that A.I. could get us closer to Moore’s Law for everything: it could make everything better even as it makes it cheaper. Housing, health care, education, you name it.
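As a quick, hedged illustration of what a "Moore's Law for everything" would mean numerically (Moore's Law is conventionally glossed as a cost halving roughly every two years; extending that rate to housing or health care is purely an illustrative assumption here):

```latex
\text{cost}(n\ \text{years}) = \text{cost}(0)\cdot 2^{-n/2},
\qquad 2^{-16/2} = \tfrac{1}{256},
\qquad 2^{-20/2} = \tfrac{1}{1024}
```

Sustained over a couple of decades, the same compounding that made transistors cheap would cut the price of anything it touched by roughly three orders of magnitude.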
A.I. will create phenomenal wealth, but it will do so by driving the price of a lot of labor to basically zero. That is how everything gets cheaper. It’s also how a lot of people lose their jobs...To make that world a good world for people, to make that a utopia rather than a dystopia, it requires really radical policy change to make sure the wealth A.I. creates is distributed broadly. But if we can do that, he says, well, then we can improve the standard of living for people more than we ever have before in less time. So Altman’s got some proposals here for how we can do that. They’re largely proposals to tax wealth and land. And I push on them here.
This is a conversation, then, about the political economy of the next technological age. Some of it is speculative, of course, but some of it isn’t. That shift of power and wealth is already underway. Altman is proposing an answer: a move toward taxing land and wealth, and distributing it to all. We talk about that idea, but also the political economy behind it: Are the people gaining all this power and wealth really going to offer themselves up for more taxation? Or will they fight it tooth-and-nail?
We also discuss who is funding the A.I. revolution, the business models these systems will use (and the dangers of those business models), how A.I. would change the geopolitical balance of power, whether we should allow trillionaires, why the political debate over A.I. is stuck, why a pro-technology progressivism would also need to be committed to a radical politics of equality, what global governance of A.I. could look like, whether I’m just “energy flowing through a neural network,” and much more.
(You can also listen to the whole conversation by following “The Ezra Klein Show” on Apple, Spotify, Google or wherever you get your podcasts.)
Thursday, January 07, 2021
Are we the cows of the future?
...Cows’ bodies have historically served as test subjects — laboratories of future bio-intervention and all sorts of reproductive technologies. Today cows crowd together in megafarms, overseen by digital systems, including facial- and hide-recognition systems. These new factories are air-conditioned sheds where digital machinery monitors and logs the herd’s every move, emission and production. Every mouthful of milk can be traced to its source.
And it goes beyond monitoring. In 2019 on the RusMoloko research farm near Moscow, virtual reality headsets were strapped onto cattle. The cows were led, through the digital animation that played before their eyes, to imagine they were wandering in bright summer fields, not bleak wintry ones. The innovation, which was apparently successful, is designed to ward off stress: The calmer the cow, the higher the milk yield.
A cow sporting VR goggles is comedic as much as it is tragic. There’s horror, too, in that it may foretell our own alienated futures. After all, how different is our experience? We submit to emotion trackers. We log into biofeedback machines. We sign up for tracking and tracing. We let advertisers’ eyes watch us constantly and mappers store our coordinates.
Could we, like cows, be played by the machinery, our emotions swayed under ever-sunny skies, without us even knowing that we are inside the matrix? Will the rejected, unemployed and redundant be deluded into thinking that the world is beautiful, a land of milk and honey, as they interact minimally in stripped-back care homes? We may soon graze in the new pastures of digital dictatorship, frolicking while bound. Leslie then describes the ideas of German philosopher and social critic Theodor Adorno:
Against the insistence that nature should not be ravished by technology, he argues that perhaps technology could enable nature to get what “it wants” on this sad earth. And we are included in that “it.”...Nature, in truth, is not just something external on which we work, but also within us. We too are nature.
For someone associated with the abstruseness of avant-garde music and critical theory, Adorno was surprisingly sentimental when it came to animals — for which he felt a powerful affinity. It is with them that he finds something worthy of the name Utopia. He imagines a properly human existence of doing nothing, like a beast, resting, cloud gazing, mindlessly and placidly chewing cud.
To dream, as so many Utopians do, of boundless production of goods, of busy activity in the ideal society reflects, Adorno claimed, an ingrained mentality of production as an end in itself. To detach from our historical form adapted solely to production, to work against work itself, to do nothing in a true society in which we embrace nature and ourselves as natural might deliver us to freedom.
Rejecting the notion of nature as something that would protect us, give us solace, reveals us to be inextricably within and of nature. From there, we might begin to save ourselves — along with everything else.