
Monday, January 08, 2024

The Importance of Not Being Earnest

I pass on some of the paragraphs from Rao's latest piece to archive them for myself here on MindBlog, and to make them available to other readers:

For my purposes, I will define earnestness as being helplessly locked into a single way of looking at what you’re doing, unaware of other ways.

I suspect there are only a few known and culturally familiar modes of being non-earnest…I think they are humor, irony, and surrealism. I’d guess humor is at least as old as civilization and possibly as old as life. Irony proper seems like an outgrowth of early modern conditions. Surrealism is the newest and youngest mode, barely a century old. I think this potted history is fun, but I won’t insist upon it. Maybe there are more modes, and maybe they appeared in a different sequence, or were all always-already present.

Here’s the core of my argument: the more complex the circumstances, the more dangerous it is to inhabit them from a single perspective; i.e., earnestly. The only really good reason to do so is when dealing with small children or deeply traumatized adults, both of whom need some earnestness in their environment to feel safe.

The importance of non-earnestness is evident even in the “simple” task of chopping vegetables. If you’re doing that for more than 15 minutes, you’ll likely get bored, and start to get sloppy and careless. Creative multi-modal engagement with chopping vegetables — seeing shapes perhaps, or noting colors and textures with an artist’s eye — keeps you mindfully absorbed for longer, more robustly.

In your brain there are two basic modes — mind wandering, sustained by the default-mode network, and focus, sustained by the task-positive network — and my assertion is that they should work together like a clock escapement, unfolding as little micro-fugues of fancy that depart from and return to a base literal mode, and trace out a kind of strange-attractor orbit around the nominal behavior. Something like this is visible at even more basic levels: A healthy heart exhibits high HRV (heart-rate variability). Fitness trackers use HRV as the primary indicator of cardiovascular health. Low variability is a mark of poor health or disease.
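Rao's HRV aside can be made concrete. A standard time-domain HRV metric is RMSSD, the root mean square of successive differences between heartbeat (RR) intervals: higher values mean more beat-to-beat variability, which his analogy treats as the mark of healthy, flexible dynamics. A minimal sketch (the interval values below are invented for illustration):

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals,
    a standard time-domain heart-rate-variability metric."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented example intervals (milliseconds between beats):
variable_heart = [812, 790, 845, 801, 830, 795]    # healthy-looking variability
metronomic_heart = [800, 801, 800, 799, 800, 801]  # rigidly "earnest" rhythm

print(rmssd(variable_heart) > rmssd(metronomic_heart))  # True
```

On these numbers the variable rhythm scores far higher than the metronomic one, which is the point of the analogy: low variability, like single-perspective earnestness, signals a fragile system.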

Now apply that same principle to complex, large-scale systems and problems. Can you afford to be on-the-nose earnest in thinking about them? Are humor, irony, and surrealism optional extras?

The more complex the circumstances, the more dangerous it is to act in ways that are entailed by only a single perspective. Such action is fragile and degenerate. Robust action contains multitudes. It contains obliquities that harbor strategic depth. It contains tempo variations that encode unsuspected depths of adjacent informational richness.

Action must be richer than thought, because phenomenology is always richer than any single theorization. Earnestness — action confined to the imagination of one theory of itself — is behavioral impoverishment. Non-earnestness is proof of richness. Proof of life.

There is more than one way of looking at complex systems, and action within a complex system must make sense in more than one way. There must be more than one categorical scheme through which an unfactored reality can be viewed and justified.

I think we’re currently caught between the retreat of irony and the advent of surrealism.

Unless you’ve been living under a rock, you’ve probably noticed that the last decade has been marked by a broad and intense backlash against irony, the dominant mode of non-earnestness from roughly 1989 to 2010 (I think humor dominated the 70s and 80s). Now, after a transient decade of various sorts of unstable forays into deadening collective earnestness, it feels like we’re shifting en masse to a dominantly surreal mode.

I’ve decided to approach 2024 with a surreal orientation. I don’t quite know what the hell that means yet, but I plan to fuck around and find out.

Humor would be nice to have in what’s already shaping up to be a joyless year, and irony will provide, as it always does, some solace in the darkest, most joyless depths of it. But the workhorse modality is going to be surrealism. Beat-by-beat, breath-by-breath, the creativity of our responses to the year is going to be shaped by our ability to repeatedly escape into the adjacent impossible, and from that vantage point spot the germs of new possibilities. You cannot jailbreak the future from the tyranny of the past without stepping outside of both.

It is hard to escape the thought that we are going to be unsurprisingly unlucky as a planet in 2024, with few and uncertain bright prospects to alleviate the general gloom. We are going to end up with a cognitively compromised geriatric as US President by December. We are going to let two bloody wars grind on. We are going to see weaponized AI compound myriad miseries.

If there is serendipity (surprising luck) to be found in 2024, it will be found and nurtured at the micro level. By people who understand what it means to chop vegetables non-earnestly, and escape the tyranny of the real with every breath and stroke. By people who are not too scared of life to stubbornly resist the temptations of humor, irony, and surrealism in service of the idiot gods of authenticity and earnestness.

Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this post to save for myself some clips of text from Venkatesh Rao’s most recent piece, and to continue mulling over where I place it on the trivial-versus-sublime spectrum. (Some of his jargon you will only understand if you have followed the previous installments on Rao I’ve put on MindBlog; note the link at the end to The Summer of Protocols.) Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.

But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.

So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:

Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.

In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.

I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.

Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that, once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.

Call this property hardness.

Human civilization depends in part on our ability to make the future more certain in specific ways.

Fixed, hard points across time that let us make the world more predictable.

We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.

The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.

It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
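Rao's ATM example is an ordering constraint, and ordering constraints are easy to make literal in code. A toy sketch (all names here are invented for illustration): the protocol hard-codes card return before cash dispensing, so "remember to take your card" stops being a memory task and becomes a structural guarantee.

```python
def return_card(events):
    events.append("card_returned")

def dispense_cash(events, amount):
    events.append(f"cash_dispensed:{amount}")

def atm_finish_withdrawal(amount):
    """The ordering is fixed in code rather than left to the user's
    memory: the card always comes back before the cash appears."""
    events = []
    return_card(events)            # hard point: card first, always
    dispense_cash(events, amount)
    return events

log = atm_finish_withdrawal(100)
assert log.index("card_returned") < log.index("cash_dispensed:100")
print(log)  # ['card_returned', 'cash_dispensed:100']
```

The hardness lives in the fixed call order inside `atm_finish_withdrawal`: no caller can obtain cash without the card-return event having already happened.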
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.

In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.

The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.

The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.

Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao on how modern AI puts us into an age of exploration of computational reality. I got halfway through my usual gig of extracting clips from the article to archive the main points in a post for later lookup by myself or others. Halfway through this chore, I realized the material was so deep and dense that my clips contained more than half of the original text. So I halted the editing and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which is a vastly better job than I could have done without spending many more hours on the material: 

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.


Thursday, December 07, 2023

Google's Bard and ChatGPT-4 on using AI to explore beyond language and words...

I show first the prompt I addressed yesterday afternoon to both Bard and ChatGPT-4. The more nuanced response I received from Bard makes me wonder if Google has already started using the new Gemini AI algorithms announced yesterday to power Bard. Here is the prompt, followed by the Bard and ChatGPT-4 responses:

Exploring beyond language and words - starting a conversation: Let us suppose that increasingly sophisticated LLMs, generative pretrained transformers, will become superior to humans in performing any linguistic chores requiring use and knowledge of all the world's written literature that was used as the training opus. This might free humans to pay more attention to understanding non-linguistic aspects of their experiences (such as non-verbal music, art, prosody, body language, social mirroring, etc). Could you suggest ways in which AI could assist in defining and elucidating the non-linguistic aspects of our lives so that we might devote more of our attention to them rather than to linguistic chores?

Google's Bard's response:
How AI could help us explore beyond language and words:

Your suggestion about increasingly sophisticated LLMs taking over linguistic tasks opens up a fascinating possibility: freeing humans to focus on non-linguistic aspects of experience. Here are some ways AI could assist us in this exploration:

1. Data analysis and pattern recognition:

  • Large-scale analysis of nonverbal communication: AI could analyze vast datasets of audio, video, and sensor data to identify patterns and relationships between nonverbal cues like facial expressions, gestures, posture, and vocal intonation. This could deepen our understanding of non-verbal communication in various contexts, from interpersonal communication to artistic expression.
  • Mapping emotions and mental states: AI could analyze physiological data like heart rate, skin conductance, and brain activity to identify correlations with emotions and mental states. This could lead to the development of tools that help us better understand our own and others' emotions, fostering empathy and improving communication.
  • Identifying patterns in non-verbal art forms: AI could analyze large collections of music, paintings, sculptures, and other non-verbal art forms to identify recurring patterns, styles, and themes. This could help us discover new insights into human creativity and unlock hidden meanings in art.

2. AI as a creative collaborator:

  • Generating novel forms of non-verbal communication: AI could be used to generate new forms of non-verbal communication, such as synthesized musical pieces, abstract visual art, or even gestures and facial expressions. This could lead to new artistic expressions and enhance communication beyond words.
  • Personalized non-verbal experiences: AI could personalize non-verbal experiences based on individual preferences and emotional states. For example, AI could curate music playlists or generate visual art that resonates with a user's mood. This could deepen our emotional engagement with non-verbal art forms and enhance our well-being.
  • Interactive storytelling and learning: AI could create interactive non-verbal narratives that adapt to user choices and emotions. This could provide engaging and personalized learning experiences in various fields, from language learning to cultural immersion.

3. AI as a facilitator for human interaction:

  • Real-time translation and interpretation of non-verbal cues: AI could be used to translate nonverbal cues in real-time, facilitating communication across cultures and bridging the gap between people with different communication styles.
  • Augmenting sensory experiences: AI could be used to create immersive virtual environments that stimulate various senses, allowing us to explore and interact with the world in new ways. This could broaden our understanding of non-verbal communication and deepen our appreciation for sensory experiences.
  • Promoting mindfulness and self-awareness: AI could be used to develop tools that help us become more aware of our own non-verbal communication and emotions. This could enhance our self-awareness and improve our ability to manage our emotions and relationships.

4. Ethical considerations:

While AI holds immense potential for assisting us in exploring the non-linguistic aspects of our lives, ethical considerations must be addressed. These include:

  • Bias and fairness: AI models are susceptible to biases present in the data they are trained on. It is crucial to ensure that AI tools used to analyze and interpret non-verbal communication are unbiased and fair.
  • Data privacy and security: The use of AI necessitates the collection and analysis of personal data. It is essential to ensure data privacy and security, safeguarding individuals' information from unauthorized access or misuse.
  • Human autonomy and control: AI should be used to enhance human understanding and experience, not replace it. We must ensure that AI remains a tool that empowers humans to explore and appreciate the non-linguistic aspects of life, not a force that dictates our thoughts and emotions.

By leveraging the power of AI responsibly and ethically, we can embark on a journey of deeper understanding and engagement with the non-linguistic aspects of our existence. This journey holds the potential to enrich our lives, strengthen our relationships, and unlock new forms of creativity and expression.


ChatGPT-4's response:

The idea of AI assisting in understanding and elucidating non-linguistic aspects of human experience is intriguing and holds much potential. Here are several ways in which AI could be instrumental in this area:

  1. Enhanced Emotional Recognition and Analysis: AI can be trained to recognize and interpret human emotions through facial expressions, voice modulation, and body language. This would be particularly useful in enhancing interpersonal communication and understanding, especially in situations where verbal communication is limited or not possible.

  2. Music and Art Analysis: AI can analyze patterns, styles, and historical influences in music and art. By doing so, it can provide insights into the emotional and cultural contexts of various artworks or musical pieces, deepening our appreciation and understanding of these forms.

  3. Virtual Reality and Augmented Reality Experiences: AI can be used to create immersive VR and AR experiences that enable individuals to explore non-linguistic aspects of human culture and history, such as virtual tours of art galleries, historical sites, or even recreations of historical events.

  4. Non-Verbal Communication Training: AI can assist in teaching and improving non-verbal communication skills, such as understanding body language, gestures, and facial expressions, which are crucial for effective interpersonal interactions.

  5. Social Mirroring and Behavioral Analysis: AI can analyze social interactions and provide feedback or insights into social dynamics. This could be useful in understanding group behaviors, cultural norms, and even in improving social skills.

  6. Sensory Enhancement Technologies: AI-driven technologies could enhance or augment human senses, allowing for a deeper engagement with the non-linguistic elements of the world, such as experiencing a wider range of sounds or visual details.

  7. Emotional and Psychological Support: AI can offer personalized support and interventions based on non-linguistic cues, aiding in mental health and emotional well-being.

  8. Enhancing Creative Processes: AI can assist artists, musicians, and creators by suggesting non-linguistic elements that might enhance their work, based on analysis of vast amounts of artistic and musical data.

  9. Personalized Learning and Development: AI can tailor learning experiences to individual preferences and learning styles, focusing on non-linguistic methods like interactive simulations, visual aids, and hands-on activities.

  10. Cross-Cultural Understanding and Communication: AI can help bridge cultural gaps by providing insights into non-verbal cues and norms that are specific to different cultures, aiding in cross-cultural understanding and communication.

In all these ways, AI has the potential to significantly enrich human understanding and appreciation of the non-linguistic aspects of our lives, allowing us to engage more deeply with the world beyond words.


Sunday, November 26, 2023

Religious wars in the tech industry.

In this post I continue to use MindBlog as my private reference library to put down essential ideas I want to hang onto by extracting relevant clips, in this case from the most recent essay by Venkatesh Rao. Here are the clips:

Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The OpenAI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.

Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.

Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.

Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.

Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.

Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.

It’s not that there’s no common ground. Everyone agrees GPUs are important, Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is the part that is common ground is largely disconnected from the contentious bits.

In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps continuation of incredulous staring by other means. Such as boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and in the future, perhaps actual hot wars.

Why do I think we are entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) The vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle of a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.

The Six Religions

Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.

The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.

The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.

The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.

The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.

The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.

The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.

Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?

The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology… Ordinary political interests, while present, are secondary.

The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…

The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.

What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.

Friday, September 22, 2023

This is the New 'Real World'

For my own later reference, and hopefully of use to a few MindBlog readers,  I have edited, cut and pasted, and condensed from 3960 to 1933 words the latest brilliant article generated by Venkatesh Rao at

The word world, when preceded by the immodest adjective real, is a self-consciously anthropocentric one, unlike planet or universe. To ask what sort of world we live in invites an inherently absurd answer. But if enough people believe in an absurd world, absurd but consequential histories will unfold. And consequentiality, if not truth, perhaps deserves the adjective real.

Not all individual worlds that in principle contribute to the real world are equally consequential… A familiar recent historical real world, the neoliberal world, was shaped more by the beliefs of central bankers than by the beliefs of UFO-trackers. You could argue that macroeconomic theories held by central bankers are not much less fictional than UFOs. But worlds built around belief in specific macroeconomic theories mattered more than ones built around belief in UFOs. In 2003, at least, it would have been safe to assume this; it is no longer a safe assumption in 2023.

Of the few hundred consciously shared worlds like religions, fandoms, and nationalisms that are significant, perhaps a couple of dozen matter strongly, and perhaps a dozen matter visibly, the other dozen consisting of various sorts of black or gray swans lurking in the margins of globally recognized consequentiality.

This, then, is the “real” world — the dozen or so worlds that visibly matter in shaping the context of all our lives…The consequentiality of the real world is partly a self-fulfilling prophecy of its own reality. Something that can play the role of truth. For a while.

The fact that some worlds survive a brutal winnowing process does not alter the fact that they remain anthropocentric is/ought conceits… A world that has made the cut to significance and consequentiality, to the level of mattering, must still survive its encounters with material, as opposed to social, realities. For all the consequential might of the Catholic Church in the 17th century, it was Galileo’s much punier Eppur si muove world that eventually ended up mattering more. Truth eventually outweighed short-term consequentiality in the enduring composition of real.

It would take a couple of centuries for Galileo’s world to be counted among the ones that mattered in shaping the real world. And the world of the Catholic Church, despite centuries of slow decline, still matters. It is just that the real world has gotten much bigger in scope, and other worlds constituting it, like the one shaping the design of the iPhone 15, matter much more.

…to answer a question like what sort of world do we live in? is to craft an unwieldy composite portrait out of the dozen or so constituent worlds that matter at any given time …it is a fragile, unreliable, dubious, borderline incoherent, unsatisfying house of cards destined to die. Yet, while it lives and reigns, it is an all-consuming, all-dominating thing… the “real” world is not necessarily any more real than private fantasies. It is merely vastly more consequential — for a while.

When “the real world” goes away because we’ve stopped believing in it, as tends to happen every few decades, it can feel like material reality itself, rather than a socially constructed state of mind, has come undone. And we scramble to construct a new real world. It is a necessary human tendency. Humans need a real world to serve as a cognitive “outdoors” (and escape from “indoors”), even if it is neither eternal nor true. A shared place we can accuse each other of not living in, and being detached from…Humans will conspire to cobble together a dozen new fantasies and label it the real world, and you and I will have to live in it too.

So it is worth asking the question, what sort of world do we live in? And it is worth actually constructing the answer, and giving it the name the real world, and using it to navigate life — for a while.

So let’s take a stab at it.

The real world of the early eighties was one defined by the boundary conditions of the Cold War, an Ozone hole, PCs, video games, Michael Jackson, a pre-internet social fabric, and no pictures of Uranus, Neptune, Pluto, or black holes shaping our sense of the place of our planet within the broader cosmos.

The real world that took shape in the nineties, the neoliberal world to which Margaret Thatcher declared there is no alternative (TINA), was one defined by the rise of the internet, unipolar geopolitics, the economic ascent of China, The Simpsons, Islamic terrorism, and perhaps most importantly, a sense of politics ceasing to matter against the backdrop of an unstoppable increase in global prosperity.

That real world began to wobble after 9/11, burst critical seams during the Great Recession, and started to go away in earnest after 2015, in the half-decade that ended with the pandemic. The passing of the neoliberal world was experienced as a trauma across the world, even by those who managed to credibly declare themselves winners.

What has taken shape in the early 2020s defies a believable characterization as real, for winners and losers alike. Declaring it weird studiously avoids assessments of realness. Some, like me, go further and declare the world to be permaweird…the weirdness is here to stay.

Permaweird does not mean perma-unreal. The elusiveness of a “New Normal” does not mean no “New Real” can emerge, out of new, and learnable, patterns of agency and consequentiality…the forces shaping the New Real are becoming clear. Here is a list off the top of my head. It should be entirely unsurprising.

1 Energy transition
2 Aging population
3 Weird weather
4 Machine learning
5 Memefied politics
6 The slowing of Moore’s Law
7 Meaning crises (plural)
8 Stagnation of the West
9 Rise of the Rest
10 Post-ZIRP economics
11 Post-Covid supply chains
12 Climate refugee movements

You will notice that none of the forces on the list is particularly new or individually very weird. What’s weird is the set as a whole, and the difficulty of putting them together into a notion of normalcy.

Forces though, are not worlds. We may trade in our gasoline-fueled cars for EVs, but we do not inhabit “the energy transition” the way we inhabit a world-idea like “neoliberalism” or “religion.”

Sometimes forces directly translate into consequential worlds. In the 1990s, the internet was a force shaping the real world, and also created a world — the inhabitable world of the very online — that was part of the then-emerging sense of “real.”

Sometimes forces indirectly create worlds. Low-interest rates created another important constituent world of the Old Real …Vast populations in liberal urban enclaves lived out ZIRPy lifestyles, eating their avocado toast, watching TED talks, riding sidewalk scooters, producing “content”, and perversely refusing to be rich enough to buy homes.

Something similar appears to be happening in response to the force of post-ZIRP economics. The public internet, dominated by vast global melting-pot platforms featuring vast culture wars, appears to be giving way to a mix of what I’ve called cozyweb enclaves and protocol media,…This world too, will be positioned to consequentially shape the New Real as strongly as the very online world shaped the Old Real.

I won’t try to provide careful arguments here, or justify my speculative inventory of forces, but here is my list of resulting worlds being carved out by them, which I have arrived at via a purely impressionistic leap of attempted synthesis. Together, these worlds constitute the New Real:

1 Climate refugee world
2 Disaster world (the set of places currently experiencing disaster conditions)
3 Dark Forest online world
4 Death-star world (centered on the assemblage of spaces controlled by declining wealth or power concentrations)
5 Non-English civilizational worlds (including Chinese and Indian)
6 Weird weather worlds
7 Non-institutional world (including, but not limited to, free-agent and blockchain-based worlds)
8 Trad Retvrn LARP world
9 Retiree world
10 Silicon realpolitik world
11 AI-experienced world
12 Resource-localism world (set of spaces shaped by a dominant scarce resource like energy or water)

These worlds are worlds because it is possible to imagine lifestyles entirely immersed in them. They are consequential worlds because each already has enough momentum and historical leverage to reshape the composite understanding of real. What climate refugees do in climate refugee world will shape what all of us do in the real world.

World 4 is worth some elaboration. In it I include almost everything that dominates current headlines and feels “real,” including spaces dominated by billionaires, governments, universities, and traditional media. Yet, despite the degree to which it dominates the current distribution of attention, my sense is that it has only a small and diminishing role to play in defining the New Real. When we use the phrase in the real world in the coming decade, we will not mainly be referring to World 4.

World 11 is also worth some elaboration. One reason I believe weirdness is here to stay is that the emerging ontologies of the New Real are neither entirely human in origin, nor likely to respect human desires for common-sense conceptual stability in “reality.”

For the moment, AIs inhabit the world on our terms, relating to it through our categories. But it is already clear that they are not restricted to human categories, or even to categories expressible within human languages. Nor should they be, if we are to tap into their powers. They are limited by human ontology only to the extent that their presence in the world must be mediated by humans. … they will definitely evolve in ways that keep the real world permaweird.

Can we slap a usefully descriptive short label onto the New Real, comparable to “Neoliberal World” or “Cold War World”?

There is no such obviously dominant eigenvector of consequentiality in the New Real, but the most obvious candidate is probably global warming. So we might call the New Real the warming world. Somehow though, it doesn’t feel like warming shapes our experience of realness as clearly as its predecessors. Powerful though the calculus of climate change is, it operates via too many subtle degrees of indirection to shape our sense of the real. Still, I’ll leave the phrase there for your consideration.

An idiosyncratic personal candidate … is magic-realist world. A world that is consequentially real and permaweird is a world that feels magical and real at the same time, and is sustainably inhabitable: but only if you let go of the craving for a sense of normalcy.

It offers unprecedented, god-like modes of agency that are available for almost anyone to exercise…The catch is this — attachment to normalcy equals learned helplessness in the face of all this agency. If you want to feel normal, almost none of the magical agency is available to you. An attachment to normalcy limits you to mere magical thinking, in the comforting company of an equally helpless majority. If you are willing to live with a sense of magical realism, a great deal more suddenly opens up.

This, I suspect, is the flip side of the idea that “we are as gods, and might as well get good at it.” There is no normal way to feel like a god. A magical being must necessarily experience the world as a magical doing. To experience the world as permaweird is to experience it as a god.

This is not necessarily an optimistic thought. A real world, shaped by god-like humans, each operating by an idiosyncratic sense of their own magical agency, is not necessarily a good world, or a world that conjures up effective collective responses to its shared planetary problems.

But it is a world that does something, rather than nothing, and that’s a start.

Monday, August 21, 2023

Never-Ending Stories - a survival tactic for uncertain times

I keep returning to clips of text that I abstracted from a recent piece by Venkatesh Rao. It gets richer for me on each re-reading. I like its points about purpose being inappropriate for uncertain times, when the simplification offered by a protocol narrative is the best route to survival. I post the clips here for my own future use, also thinking it might interest some MindBlog readers:

Never-Ending Stories

Marching beat-by-beat into a Purposeless infinite horizon

During periods of emergence from crisis conditions (both acute and chronic), when things seem overwhelming and impossible to deal with, you often hear advice along the following lines:

Take it one day at a time

Take it one step at a time

Sleep on it; morning is wiser than evening

Count to ten

Or even just breathe

All these formulas have one thing in common: they encourage you to surrender to the (presumed benevolent) logic of a situation at larger temporal scales by not thinking about it, and only attempt to exercise agency at the smallest possible temporal scales.

These formulas typically move you from a state of high-anxiety paralyzed inaction or chaotic, overwrought thrashing, to deliberate but highly myopic action. They implicitly assume that lack of emotional regulation is the biggest immediate problem and attempt to get you into a better-regulated state by shrinking time horizons. And that deliberate action (and more subtly, deliberate inaction) is better than either frozen inaction or overwrought thrashing.

There is no particular reason to expect taking things step-by-step to be a generally good idea. Studied, meditative myopia may be good for alleviating the subjective anxieties induced by a stressful situation, but there’s no reason to believe that the objective circumstances will yield to the accumulating power of “step-by-step” local deliberateness.

So why is this common advice? And is it good advice?

I’m going to develop an answer using a concept I call narrative protocols. This step-by-step formula is a typical invocation of such protocols. They seem to work better than we expect under certain high-stress conditions.

Protocol Narratives, Narrative Protocols

Loosely speaking, a protocol narrative is a never-ending story. I’ll define it more precisely as follows:

A protocol narrative is a never-ending story, without a clear capital-P Purpose, driven by a narrative protocol that can generate novelty over an indefinite horizon, without either a) jumping the shark, b) getting irretrievably stuck, or c) sinking below a threshold of minimum viable unpredictability.

A narrative protocol, for the purposes of this essay, is simply a storytelling formula that allows the current storytellers to continue the story one beat at a time, without a clear idea of how any of the larger narrative structure elements, like scenes, acts, or epic arcs, might evolve.

Note that many narrative models and techniques, including the best-known one, the Hero’s Journey, are not narrative protocols because they are designed to tell stories with clear termination behaviors. They are guaranteed-ending stories. They may be used to structure episodes within a protocol narrative, but by themselves are not narrative protocols.

This pair of definitions is not as abstract as it might seem. Many real-world fictional and non-fictional narratives approximate never-ending stories.

Long-running extended universe franchises (Star Wars, Star Trek, MCU), soap operas, South Park …, the Chinese national grand narrative, and perhaps the American one as well, are all approximate examples of protocol narratives driven by narrative protocols.

Protocols and Purpose

In ongoing discussions of protocols, several of us independently arrived at a conclusion that I articulate as protocols have functions but not purposes, by which I mean capital-P Purposes. Let’s distinguish two kinds of motive force in any narrative:

1. Functions are causal narrative mechanisms for solving particular problems in a predictable way. For example, one way to resolve a conflict between a hero and a villain is a fight. So a narrative technology that offers a set of tropes for fights has something like a fight(hero, villain) function that skilled authors or actors can invoke in specific media (text, screen, real-life politics). You might say that fight(hero, villain) transitions the narrative state causally from a state of unresolved conflict to resolved conflict. Functions need not be dramatic or supply entertainment though; they just need to move the action along, beat-by-beat, in a causal way.

2. Purposes are larger philosophical theses whose significance narratives may attest to, but do not (and cannot) exhaust. These theses may take the form of eternal conditions (“the eternal struggle between good and neutral”), animating paradoxes (“If God is good, why does He allow suffering to exist?”), or historicist, teleological terminal conditions. Not all stories have Purposes, but the claim is often made that the more elevated sort can and should. David Mamet, for instance, argues that good stories engage with and air eternal conflicts, drawing on their transformative power to drive events, without exhausting them.

In this scheme, narrative protocols only require a callable set of functions to be well-defined. They do not need, and generally do not have, Purposes. Functions can sustain step-by-step behaviors all by themselves.
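The idea of narrative functions as causal, Purpose-free state transitions can be sketched in code. This is a minimal, hypothetical illustration of my own — the `Story` class and the `fight` and `journey` functions are inventions for this sketch, not anything from Rao's essay:

```python
# A sketch of "functions" as causal narrative mechanisms: each one
# moves the story state along a single beat, with no reference to
# any overarching Purpose. All names here are illustrative inventions.

from dataclasses import dataclass, field

@dataclass
class Story:
    state: str = "unresolved_conflict"
    beats: list = field(default_factory=list)

def fight(story: Story, hero: str, villain: str) -> Story:
    """A dramatic function: resolves a conflict via a fight trope."""
    story.beats.append(f"{hero} fights {villain}")
    story.state = "resolved_conflict"
    return story

def journey(story: Story, hero: str, place: str) -> Story:
    """A non-dramatic function: it just moves the action along."""
    story.beats.append(f"{hero} travels to {place}")
    return story

s = Story()
s = journey(s, "hero", "the mountains")
s = fight(s, "hero", "villain")
assert s.state == "resolved_conflict"
assert len(s.beats) == 2
```

Nothing in the sketch knows, or needs to know, where the larger narrative is headed; the functions alone are enough to keep the story advancing beat by beat.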

What’s more, not only are Purposes not necessary, they might even be actively harmful during periods of crisis, when arguably a bare-metal protocol narrative, comprising only functions, should exist.

There is, in fact, a tradeoff between having a protocol underlying a narrative, and an overarching Purpose guiding it from “above.”

The Protocol-Purpose Tradeoff

During periods of crisis, when larger logics may be uncomputable, and memory and identity integration over longer epochs may be intractable, it pays to shorten horizons until you get to computability and identity integrity — so long as the underlying assumptions that movement and deliberation are better than paralysis and thrashing hold.

The question remains though. When are such assumptions valid?

This is where the notion of a protocol enters the picture in a fuller way. There are protocols as in short foreground behavior sequences (like step-by-step), but there is also the idea of a big-P Protocol, as in a systematic (and typically constructed rather than natural) reality in the background that has more lawful and benevolent characteristics than you may suspect.

Enacting protocol narratives is enacting trust in a big-P Protocolized environment. You trust that the protocol narrative is much bigger than the visible tip of the iceberg that you functionally relate to.

As a simple illustration, on a somewhat sparse random graph, trying to navigate by a greedy or myopic algorithm, one step at a time, toward destination coordinates is likely to get you trapped in a random cul-de-sac. But that same algorithm, on a regular rectangular grid, will not only get you to your destination, it will do so via a shortest path. You can trust the gridded reality more, given the same foreground behaviors.

In this example, the grid underlying the movement behavior is the big-P Protocol that makes the behavior more effective than it would normally be. It serves as a substitute for a big-P Purpose.
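The grid half of this illustration is easy to check in code. Here is a small sketch of my own (not from the essay) showing that a purely myopic policy on a full rectangular grid — always take whichever step reduces the remaining distance — reaches the goal in exactly the shortest-path (Manhattan-distance) number of steps:

```python
# Greedy, myopic navigation on a regular rectangular grid.
# Each step looks only one move ahead, yet on a grid the steps
# add up to a shortest path. On an irregular sparse graph the same
# policy can dead-end in a cul-de-sac, because no background
# regularity guarantees that local progress is global progress.

def greedy_grid_walk(start, goal):
    """Myopic policy: each step reduces Manhattan distance by 1."""
    x, y = start
    gx, gy = goal
    steps = 0
    while (x, y) != (gx, gy):
        if x != gx:
            x += 1 if gx > x else -1
        else:
            y += 1 if gy > y else -1
        steps += 1
    return steps

start, goal = (0, 0), (5, 7)
manhattan = abs(goal[0] - start[0]) + abs(goal[1] - start[1])
assert greedy_grid_walk(start, goal) == manhattan  # shortest path: 12 steps
```

The point of the sketch is the trust relationship: the same foreground behavior succeeds or fails depending entirely on the lawfulness of the background structure.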

This also gives us a way to understand the promises, if not the realities, of big-P purposes of the sort made by religion, and why there is an essential tension and tradeoff here. 

To take a generic example, let’s say I tell you that in my religion, the cosmos is an eternal struggle between Good and Evil, and that you should be Good in this life in order to enjoy a pleasurable heaven for eternity (terminal payoff) as well as to Do The Right Thing (eternal principle).

How would you use it?

This is not particularly useful in complex crisis situations where good and evil may be hard to disambiguate, and available action options may simply not have a meaningful moral valence.

The protocol directive of step-by-step is much less opinionated. It does not require you to act in a good way. It only requires you to take a step in a roughly right direction. And then another. And another. The actions do not even need to be justifiably rational with respect to particular consciously held premises. They just need to be deliberate.


A sign that economic narratives are bare-bones protocol narratives is the fact that they tend to continue uninterrupted through crises that derail or kill other kinds of narratives. Through the Great Weirding and the Pandemic, we still got GDP, unemployment, inflation, and interest rate “stories.”

I bet that even if aliens landed tomorrow, even though the rest of us would be in a state of paralyzed inaction, unable to process or make sense of events, economists would continue to publish their numbers and argue about whether aliens landing is inflationary or deflationary. And at the microeconomic level, Matt Levine would probably write a reassuring Money Matters column explaining how to think about it all in terms of SEC regulations and force majeure contract clauses.

I like making fun of economists, but if you think about this, there is a profound and powerful narrative capability at work here. Strong protocol narratives can weather events that are unnarratable for all other kinds of narratives. Events that destroy high-Purpose religious and political narratives might cause no more than a ripple in strong protocol narratives.

So if you value longevity and non-termination, and you sense that times are tough, it makes sense to favor Protocols over Purposes.


Step-by-Step is Hard-to-Kill

While economic narratives provide a good and clear class of examples of protocol narratives, they are not the only or even best examples.

The best examples are ones that show that a bare set of narrative functions is enough to sustain psychological life indefinitely. That surprisingly bleak narratives are nevertheless viable.

The very fact that we can even talk of “going through the motions” or feeling “empty and purposeless” when a governing narrative for a course of events is unsatisfying reveals that something else is in fact continuing, despite the lack of Purpose. Something that is computationally substantial and life-sustaining.

I recall a line from (I think) an old Desmond Bagley novel I read as a teenager, where a hero is trudging through a trackless desert. His inner monologue goes: one bloody foot after the next bloody foot; one bloody step after the next bloody step.

Weird though it might seem, that’s actually a complete story. It works as a protocol narrative. There is a progressively summarizable logic to it, and a memory-ful evolving identity to it. If you’re an economist, it might even be a satisfying narrative, as good as “number go up.”

Protocol narratives only need functions to keep going.

They do not need Purposes, and generally are, to varying degrees, actively hostile to such constructs. It’s not just take it one day at a time, but an implied don’t think about weeks and months and the meaning of life; it might kill you.

While protocol narratives may tolerate elements of Purpose during normal times, they are especially hostile to them during crisis periods. If you think about it, step-by-step advancement of a narrative is a minimalist strategy. If a narrative can survive on a step-by-step type protocol alone, it is probably extraordinarily hard to kill, and doing more likely adds risk and fragility (hence the Protocol-Purpose tradeoff).

During periods of crisis, narrative protocols switch into a kind of triage mode where only step-by-step movement is allowed (somewhat like how, in debugging a computer program, stepping through code is a troubleshooting behavior). More abstract motive forces are deliberately suspended.

I like to think of the logic governing this as exposure therapy for life itself. In complex conditions, the most important thing to do is simply to choose life over and over, deliberately, step-by-step. To keep going is to choose life, and it is always the first order of business.

This is why, as I noted in the opening section, lack of emotional regulation is the first problem to address. Because in a crisis, if it is left unmanaged, it will turn into a retreat from life itself. As Roosevelt said, the only thing we have to fear is fear itself.

To reach for loftier abstractions than step-by-step in times of crisis is to retreat from life. Purpose is a life-threatening luxury you cannot afford in difficult times. But a narrative protocol will keep you going through even nearly unnarratable times. And even if it feels like merely going through empty motions, sometimes all it takes to choose life is to be slightly harder to kill.

Monday, July 24, 2023

The evolution of transhuman forms - a MindBlog paragraph edited by GPT implementations

In this post I am documenting the final result of passing a paragraph I wrote on future human evolution through a number of different GPTs (generative pre-trained transformers). This is to archive the results of this exercise for myself, and in case it might be of interest to a few MindBlog readers. The bots were prompted to re-write or edit the paragraph to make it more accessible to a reader with a college education.

The bottom line is that the edited version with the most depth and coherence was obtained using OpenAI’s ChatGPT-4; Google Workspace Labs and Microsoft’s Bing, respectively, provided increasingly less useful versions. Here is my original paragraph:

The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon exceed those of humans. We have brought our understanding of what human beings, as they now exist, are about to a plateau level at which further increments of understanding will become increasingly small. The cream has been skimmed, the best cherries have been picked, and it seems appropriate to turn our attention now to studying the evolution of transhuman forms utilizing LLMs that are better able to manage themselves and the planet. The action is in the pretraining P of GPT, as the models move beyond their initial human management to the stage of evolving themselves, hopefully into benign versions of Darwin’s “forms most beautiful and most wonderful.” What is the point of humans other than being self-replicating machines that evolve into further forms, hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Can they evolve into transhuman minds (or mind, singular)?
Here is the result of including some of the edits suggested by ChatGPT-4’s regenerated response, a version 3 of the paragraph above:
The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon surpass those of humans.

Our understanding of human nature as it currently exists has reached a plateau level, at which further increments of understanding are becoming increasingly small. The cream has been skimmed, the best cherries have been picked. Consequently it seems the right time to shift our focus toward studying the evolution of 'transhuman' forms. These are beings that could potentially leverage advanced language models (LLMs) to better self-regulate themselves and manage the planet.

The real crux of the matter lies in the pretraining phase of these transformers (symbolized by 'P' in GPT). This is when the models are initially developed and fine-tuned. As they evolve beyond their human programming the optimistic vision is that they might transform into benign versions of what Charles Darwin called “forms most beautiful and most wonderful.”

From a philosophical perspective, one might ask: What is the ultimate purpose of human beings, if not to serve as self-replicating entities that evolve into more advanced forms - hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Is it possible for humans to evolve into a collective of transhuman minds or even a singular, transcendent mind?  These are questions worth exploring as we stand on the brink of an AI-enhanced future.

Wednesday, July 12, 2023

The True Threat of Artificial Intelligence

I would recommend having a read through Evgeny Morozov's piece in the NYTimes as an antidote to Marc Andreessen's optimistic vision of AI that was the subject of MindBlog's June 23 post. Here is a small clip from the article, followed by the titles describing different problem areas he sees:
Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization...This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.
They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.
But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.
Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.
Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.
Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.
It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.
Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).
These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.
A.G.I. will never overcome the market’s demands for profit.
A.G.I. will dull the pain of our thorniest problems without fixing them.
A.G.I. undermines civic virtues and amplifies trends we already dislike.
Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

Monday, May 08, 2023

Can a theory of events-in-time - our future history - ever be crafted?

I first drafted this as I usually do, to pass on clips that attempt to extract the core points from a fascinating brief piece by Venkatesh Rao. I then decided to ask ChatGPT-4, Google's Bard, and Microsoft's Bing to "Please condense the core ideas in the following document into a few paragraphs:" The response from ChatGPT-4 was amazing, so I paste it in here before giving you my clips:

Venkatesh Rao explores the idea of the "End of History" as an analogy to the "End of Biology," where biology transitioned from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter. The End of History could signify a move away from vitalist understandings of the logic of events-in-time. Rao acknowledges that the End of History theory is not widely popular due to its non-constructive nature, as it subverts vitalist understanding of events without providing a parsimonious and generative alternative.

Rao suggests that deeper logics of events-in-time exist in nature, with biological evolution and thermodynamics offering more complex explanations than a mere "one damn thing after another" perspective. He proposes that events-in-time might be viewed as the trace of an irreversible, non-ergodic computation process, with long-term collective memory playing a significant role. History, as we have understood it so far, might be considered a particular memory architecture with a specific irreversible and non-ergodic structure to the evolving computation. This perspective could potentially pave the way for a more refined and deliberate understanding of future events, reducing events-in-time after the End of History to a specialized kind of computation.

Here are my clips that Chat GPT 4 used to generate the above condensation:

What if the Hegel-Fukuyama idea of the End of History is something like Darwin’s idea of natural selection?

The Darwinian century began with On the Origin of Species in 1859 and ended with the discovery of the structure of DNA in 1953. Humanity experienced an End of Biology moment somewhere between those bookend events…a demotion of the discipline from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter…Biology went from being an inscrutable aspect of providence to an emerging engineering discipline, subservient to physics and mathematics by way of chemistry.

By analogy, the End of History moment is something like an end to vitalist understandings of the logic of events-in-time…There is no role for divine agency, and no justification for assigning a particular positive or negative valence to apparent secular tendencies in the stream of events…The fact that the theory is historicist without being normative is perhaps what makes it so powerfully subversive. The End of History theory is the historicism that kills all other historicisms. Past the End of History, notions like progress must be regarded as analogous to notions like élan vital past the End of Biology. …it is undeniable that 30 years in, the End of History theory is still not particularly popular…One obvious reason is that it is non-constructive. It subverts a vitalist understanding of events in time without supplying a more parsimonious and generative alternative.

In Fukuyama’s theory, there are no notions comparable to variation and natural selection that allow us to continue making sense of events-in-time. There are no Mendelian clues pointing to something like a genetics of events-in-time. There is no latent Asimovian psychohistorical technology lurking in the details of the End of History theory…Perhaps one damn thing after another is where our understanding of events in time ought to end, for our own good.

I think this is too pessimistic though. Deeper logics of events-in-time abound in nature. Even biological evolution and thermodynamics, which are more elemental process aspects of reality, admit more than a one damn thing after another reading. History, as a narrower class of supervening phenomena that must respect the grammars of both, ought to admit more interesting readings, based on broadly explanatory laws that are consistent with both, but more specific than either. Dawkins’ memetic view of cultural evolution, and various flavors of social darwinism, constitute first-order attempts at such laws. Some flavors of cosmism and transhumanism constitute more complex attempts that offer hope of wresting ever-greater agency from the universe.

So what does explain the logic of events-in-time in a way that allows us to make sense of events-in-time past the End of History, in a way that improves upon a useless one damn thing after another sense of it, and says something more than the laws of evolution or thermodynamics?

I don’t have an answer, but I have a promising clue: somehow, events-in-time must be viewed as the trace of an irreversible, non-ergodic computation process, in which long-term collective memory plays a significant role.

History, as we have understood it so far, is something like a particular memory architecture that assumes a particular irreversible and non-ergodic structure to the evolving computation. The contingency and path dependence of events-in-time in human affairs is no reason to believe there cannot also be theoretical richness within the specificity. A richness that might open up futures that can be finely crafted with a psychohistorical deliberateness, rather than simply vaguely anticipated and crudely shaped. 
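Rao's clue, that events-in-time trace an irreversible, non-ergodic computation in which long-term memory plays a significant role, can be given a minimal toy illustration (my own hypothetical sketch, not anything Rao proposes). A Pólya-urn style process keeps its whole trace, feeds it back into the odds of future events, and so locks different runs of identical rules into different stable regimes:

```python
import random

def run_history(steps, seed):
    """Toy 'events-in-time' process: irreversible (the archive only
    grows, never unwinds) and non-ergodic (early accidents are
    reinforced, so runs with the same rules settle into different
    long-run regimes)."""
    rng = random.Random(seed)
    archive = []                  # long-term collective memory: the full trace
    counts = {"A": 1, "B": 1}     # Polya-urn style reinforcement
    for _ in range(steps):
        total = counts["A"] + counts["B"]
        event = "A" if rng.random() < counts["A"] / total else "B"
        counts[event] += 1        # memory feeds back into future odds
        archive.append(event)
    return counts["A"] / (counts["A"] + counts["B"])

# Same rules, different accidents of history, different stable shares of "A".
shares = [round(run_history(5000, s), 2) for s in range(5)]
print(shares)
```

A time-average over one run tells you about that run's regime, not about the process as a whole: exactly the "theoretical richness within the specificity" that a one-damn-thing-after-another reading misses.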

Perhaps, just as life after the End of Biology was reduced to a specialized kind of chemistry, events-in-time, after the End of History, can be reduced to a specialized kind of computation.

Friday, March 17, 2023

Is the hype over A.I. justified? Does it really change everything?

I know, I know…. We’re all tired of the hysteria resulting from the sudden appearance of ChatGPT-like programs that feel to us as though they are intelligent and are already shaping much of our lives. Still, I’m inclined to agree with Sundar Pichai, the chief executive of Google, who said in 2018 that the invention of AI is more profound than the discovery of fire and electricity. New York Times writer Ezra Klein also thinks that things will never be the same. Below are a few clips from his piece, and I focus on a section describing the thoughts of Meghan O'Gieblyn. I'm reading the book he mentions and highly recommend it.
“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
...We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”
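Klein's high-level account ("probabilistic algorithms trained on digital information that make predictions about the next word") can be made concrete with a deliberately crude toy. This is my illustrative sketch only; real systems use neural networks with billions of parameters, not word counts:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word follows which: the crudest possible version of
    'predict the next word from statistics of the training text'."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the continuation seen most often after `word` in training."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict(model, "the"))  # "cat" followed "the" twice, "mat" only once
```

Even at this scale, the "zoom into specifics" problem is visible: the model emits plausible continuations without any representation of what a cat or a mat is.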
That is perhaps the weirdest thing about what we are building: The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us.
The stakes here are material and they are social and they are metaphysical. O’Gieblyn observes that “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”
This is an inversion of centuries of thought, O’Gieblyn notes, in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with A.I. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” she writes.
If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing toward companies with A.I. expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.
Katja Grace, an A.I. safety researcher, summed up this illogic pithily. Slowing down “would involve coordinating numerous people — we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional.”
One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.
What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognize that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Skepticism is more comfortable. But something the writer Erik Davis writes rings true to me: “In the court of the mind, skepticism makes a great grand vizier, but a lousy lord.”

Wednesday, March 08, 2023

A skeptical take on the AI revolution

I want to pass on to MindBlog readers some clips I have made for my own use from the transcript of a podcast interview of Gary Marcus by Ezra Klein. These abstractions help me absorb the material better, and make it easier for me to revisit and recall the arguments at a later date. Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become a leading voice of not quite A.I. skepticism, but skepticism about the A.I. path we’re on. He has founded multiple A.I. companies himself. He thinks artificial intelligence is possible. He thinks it is desirable. But he doesn’t think that what we are doing now — making these systems that do not understand what they are telling us — is going to work out the way we are hoping it will. Here are the clips:
Marcus: the system underneath ChatGPT is the king of pastiche…to a first approximation, it is cutting and pasting things…There’s also a kind of template aspect to it. So it cuts and pastes things, but it can do substitutions, things that paraphrase. So you have A and B in a sequence, it finds something else that looks like A, something else that looks like B, and it puts them together. And its brilliance comes from that when it writes a cool poem. And also its errors come from that because it doesn’t really fully understand what connects A and B.
Klein: But … aren’t human beings also kings of pastiche? On some level I know very, very little about the world directly. If you ask me about, say, the Buddhist concept of emptiness, which I don’t really understand, isn’t my answer also mostly an averaging out of things that I’ve read and heard on the topic, just recast into my own language?
Marcus: Averaging is not actually the same as pastiche. And the real difference is for many of the things you talk about, not all of them, you’re not just mimicking. You have some internal model in your brain of something out there in the world…I have a model of you. I’m talking to you right now, getting to know you, know a little bit about your interests — don’t know everything, but I’m trying to constantly update that internal model. What the pastiche machine is doing is it’s just putting together pieces of text. It doesn’t know what those texts mean.
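Marcus's "king of pastiche" picture (templates plus substitutions, with no model of what the pieces mean) can be caricatured in a few lines. This is a hypothetical illustration of his metaphor, not a description of how any actual language model works:

```python
import random

# Memorized surface forms with slots A and B, in the spirit of
# "it finds something else that looks like A, something else that
# looks like B, and it puts them together."
TEMPLATES = [
    "The {A} is known for its {B}.",
    "Without {B}, no {A} would survive.",
]
SUBSTITUTIONS = {
    "A": ["octopus", "economy", "violin"],
    "B": ["camouflage", "liquidity", "resonance"],
}

def pastiche(seed=None):
    rng = random.Random(seed)
    template = rng.choice(TEMPLATES)
    # Slots are filled independently: the surface form is fluent, but
    # nothing checks that A and B actually belong together. That gap
    # is where Marcus locates both the brilliance and the errors.
    return template.format(A=rng.choice(SUBSTITUTIONS["A"]),
                           B=rng.choice(SUBSTITUTIONS["B"]))

print(pastiche(0))
```

The output is always grammatical and sometimes apt, which is precisely why fluency alone is a poor test of understanding.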
Klein: Sam Altman, C.E.O. of OpenAI, said “my belief is that you are energy flowing through a neural network.” That’s it. And he means by that a certain kind of learning system.
Marcus: …there’s both mysticism and confusion in what Sam is saying…It’s true that you are, in some sense, just this flow through a neural network. But that doesn’t mean that the neural network in you works anything like the neural networks that OpenAI has built…The neural networks that OpenAI has built, first of all, are relatively unstructured. You have, like, 150 different brain areas that, in light of evolution and your genome, are very carefully structured together. It’s a much more sophisticated system than they’re using…
I think it’s mysticism to think that if we just make the systems that we have now bigger with more data, that we’re actually going to get to general intelligence. There’s an idea called “scale is all you need.”…There’s no law of the universe that says as you make a neural network larger, that you’re inherently going to make it more and more humanlike. There’s some things that you get, so you get better and better approximations to the sound of language, to the sequence of words. But we’re not actually making that much progress on truth…these neural network models that we have right now are not reliable and they’re not truthful…just because you make them bigger doesn’t mean you solve that problem.
Some things get better as we make these neural network models, and some don’t. The reason that some don’t, in particular reliability and truthfulness, is because these systems don’t have those models of the world. They’re just looking, basically, at autocomplete. They’re just trying to autocomplete our sentences. And that’s not the depth that we need to actually get to what people call A.G.I., or artificial general intelligence.
Klein: from Harry Frankfurt’s paper “On Bullshit”…“The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”…his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction…bullshit just has no relationship, really, to the truth…what unnerves me a bit about ChatGPT is the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge-advancing information lower at all.
…systems like these pose a real and imminent threat to the fabric of society…You have a news story that looks like, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these or 100 of these or 1,000 or 10,000 of these? Then it becomes very difficult to monitor them.
We might be able to build new kinds of A.I., and I’m personally interested in doing that, to try to detect them. But we have no existing technology that really protects us from the onslaught, the incredible tidal wave of potential misinformation like this.
Russian trolls spent something like a million dollars a month during the 2016 election… they can now buy their own version of GPT-3 to do it all the time. They pay less than $500,000, and they can do it in limitless quantity instead of bound by the human hours.
…if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t going to even know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that’s actually threatens the websites themselves, the search engines themselves.
The Klein/Marcus conversation then moves through several further areas: how large language models can be used to craft responses that nudge users toward clicking on advertising links, the declining returns of bigger models that are not helping in comprehending larger pieces of text, the use of ‘woke’ guardrails that yield pablum as answers to reasonable questions, the lack of progress in determining the trustworthiness of neural network responses, the eventual possible fusion of neural network, symbol-processing, and rule-generating systems, and the numerous hurdles to be overcome before an artificial general intelligence remotely equivalent to ours is constructed.

Wednesday, March 01, 2023

Artificial intelligence and personhood

MindBlog hesitates to add to the feeding frenzy of articles about LLMs (large language models) such as Open AI’s ChatGPT and Microsoft Bing’s “Sydney,” but I want to pass on clips from a fascinating episode of Venkatesh Rao’s “Mediocre Computing” series, that suffers from logorrhea and could use some ruthless editing, but has some searing points to make, which I largely agree with. He starts by posing A.I. as another Copernican moment:
…stripping away yet another layer of our anthropocentric conceits is obvious. But which conceits specifically, and what, if anything is left behind? In case you weren’t keeping track, here’s the current Copernican Moments list:
The Earth goes around the Sun,
Natural selection rather than God created life,
Time and space are relative,
Everything is Heisenberg-uncertain,
“Life” is just DNA’s way of making more DNA,
Computers wipe the floor with us anywhere we can keep score.
There’s not a whole lot left at this point, is there? I’m mildly surprised we End-of-History humans even have any anthropocentric conceits left to strip away. But apparently we do. Let’s take a look at this latest Fallen Conceit: Personhood.
…at a basic level: text is all it takes to produce personhood. We knew this from the experience of watching good acting…We just didn’t recognize the significance. Of course you can go beyond, adding a plastic or human body around the text production machinery to enable sex for example, but that’s optional extras. Text is all you need to produce basic see-and-be-seen I-you personhood.
Chatbots do, at a vast scale, and using people’s data traces on the internet rather than how they present in meatspace, what the combination of fiction writers and actors does in producing convincing acting performances of fictional persons.
In both cases, text is all you need. That’s it. You don’t need embodiment, meatbag bodies, rich sensory memories.
This is actually a surprisingly revealing fact. It means we can plausibly exist, at least as social creatures, products of I-you seeings, purely on our language-based maps of reality.
Language is a rich process, but I for one didn’t suspect it was that rich. I thought there was more to seeing and being seen, to I-you relations.
Still, even though text is all you need to produce personhood, the discussion doesn’t end there. Because personhood is not all there is to, for want of a better word, being. Seeing, being seen, and existing at the nexus of a bunch of I-you relationships, is not all there is to being.
What is the gap between being and personhood? Just how much of being is constituted by the ability to see and be seen, and being part of I-you relationships?
The ability to doubt, unlike the ability to think (which I do think is roughly equivalent to the ability to see and be seen in I-you ways), is not reducible to text. In particular, text is all it takes to think and produce or consume unironically believable personhood, but doubt requires an awareness of the potential for misregistration between linguistic maps and the phenomenological territory of life. If text is all you have, you can be a person, but you cannot be a person in doubt.
Doubt is eerily missing in the chat transcripts I’ve seen, from both ChatGPT and Sydney. There are linguistic markers of doubt, but they feel off, like a color-blind person cleverly describing colors. In a discussion, one person suggested this is partly explained by the training data. Online, textually performed personas are uncharacteristically low on doubt, since the medium encourages a kind of confident stridency.
But I think there’s something missing in a more basic way, in the warp and woof of the conversational texture. At some basic level, rich though it is, text is missing important non-linguistic dimensions of the experience of being. But what’s missing isn’t cosmetic aspects of physicality, or the post-textual intimate zones of relating, like sex (the convincing sexbots aren’t that far away). What’s missing is doubt itself.
The signs, in the transcripts, of repeated convergence to patterns of personhood that present as high-confidence paranoia, is I think due to the gap between thought and doubt; cogito and dubito. Text is all you need to be a person, but context is additionally necessary to be a sane person and a full being. And doubt is an essential piece of the puzzle there.
So where does doubt live? Where is the aspect of being that’s doubt, but not “thought” in a textual sense.
For one, it lives in the sheer quantity of bits in the world that are not textual. There are exabytes of textual data online, but there is orders of magnitude more data in every grain of sand. Reality just has vastly more data than even the impressively rich map that is language. And to the extent we cannot avoid being aware of this ocean of reality unfactored into our textual understandings, it shapes and creates our sense of being.
For another, even though with our limited senses we can only take in a tiny and stylized fraction of this overwhelming mass of bits around us, the stream of inbound sense-bits still utterly dwarfs what eventually trickles out as textual performances of personhood (and what is almost the same thing in my opinion, conventional social performances “in-person” which are not significantly richer than text — expressions of emotion add perhaps a few dozen bytes of bandwidth for example — I think of this sort of information stream as “text-equivalent” — it only looks plausibly richer than text but isn’t).
But the most significant part of the gap is probably experiential dark matter: we know we know vastly more than we can say. The gap between what we can capture in words and what we “know” of reality in some pre-linguistic sense is vast. The gap between an infant’s tentative babbling and Shakespeare is a rounding error relative to the gap within each of us between the knowable and the sayable.
So while it is surprising (though… is it really?) that text is all it takes to perform personhood with enough fidelity to provoke involuntary I-you relating in a non-trivial fraction of the population, it’s not all there is to being. This is why I so strongly argue for embodiment as a necessary feature of the fullest kind of AI.
The most surprising thing for me has been the fact that so many people are so powerfully affected by the Copernican moment and the dismantling of the human specialness of personhood.
I think I now see why it’s apparently a traumatic moment for at least some humans. The advent of chatbots that can perform personhood that at least some people can’t not relate to in I-you ways, coupled with the recognition that text is all it takes to produce such personhood, forces a hard decision.
Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.
And both these options are apparently extremely traumatic prospects. Either piles of mechanically digested text are spiritually special, or you are not. Either there is a sudden and alarming increase in your social universe, or a sudden sharp devaluation of mutualism as a component of identity.
Remember — I’m defining personhood very narrowly as the ability to be seen in I-you ways. It’s a narrow and limited aspect of being, as I have argued, but one that average humans are exceptionally attached to.
We are of course, very attached to many particular aspects of our beings, and they are not all subtle and ineffable. Most are in fact quite crude. We have identities anchored to weight, height, skin color, evenness of teeth, baldness, test scores, titles, net worths, cars, and many other things that are eminently effable. And many people have no issues getting bariatric surgery, wearing lifts, lightening or tanning their skin, getting orthodontics, hair implants, faking test scores, signaling more wealth than they possess, and so on. The general level of “sacredness” of strong identity attachments is fairly low.
But personhood, being “seen,” has hitherto seemed ineffably special. We think it’s the “real” us that is seen and does such seeing. We are somewhat prepared to fake or pragmatically alter almost everything else about ourselves, but treat personhood as a sacred thing.
Everything else is a “shallow” preliminary. But what is the “deep” or “real” you that we think lurks beneath? I submit that it is in fact a sacralized personhood — the ability to see and be seen. And at least for some people I know personally, that’s all there is to the real-them. They seem to sort of vanish when they are not being seen (and panic mightily about it, urgently and aggressively arranging their lives to ensure they’re always being seen, so they can exist — Trump and Musk are two prominent public examples).
And the trauma of this moment — again for some, not all of us — lies in the fact that text is all you need to produce this sacredly attached aspect of being.
I have a feeling that, as this technology becomes more widespread and integrated into everyday life, the majority of humans will initially choose some tortured, conflicted version of the first option — accepting that they cannot help but see piles of digested data in I-you ways, and trying to reclaim some sense of fragile, but still-sacred personhood in the face of such accommodation, while according as little sacredness as possible to the artificial persons, and looking for ways to keep them in their place, creating a whole weird theater of an expanding social universe.
A minority of us will be choosing the second option, but I suspect in the long run of history, this is in fact the “right” answer in some sense, and will become the majority answer. Just as with the original Copernican moment, the “right” answer was to let go attachment to the idea of Earth as the center of the universe. Now the right answer is to let go the idea that personhood and I-you seeing is special. It’s just a special case of I-it seeing that some piles of digested text are as capable of as tangles of neurons.
…there will also be a more generative and interesting aspect. Once we lose our annoying attachment to sacred personhood, we can also lose our attachment to specific personhoods we happen to have grown into, and make personhood a medium of artistic expression that we can change as easily as clothes or hairstyles. If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs (Large Language Models) to help you see-and-be-seen in arbitrary new ways?
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
What’s left, once this layer of anthropocentric conceit, static, stable personhood, dissolves in a flurry of multiplied matrices, Ballardian banalities, and imaginative larped personhoods being cheaply hallucinated in and out of existence with help from computers?
I think what is left is the irreducible individual subjective, anchored in dubito ergo sum. I doubt therefore I am.

Friday, January 13, 2023

Materialism meets transcendence

I want to pass on the URL to a PBS series by Alan Lightman that I plan to start watching as soon as I can, based on the following description and review in Science Magazine:
In part 1, “The Stars & The Osprey,” Lightman undergoes functional magnetic resonance imaging and interviews neuroscientist Robert Desimone about how much neuroimaging can tell us about Lightman’s transcendental experience. He ultimately finds this approach unsatisfying and introduces viewers to the debate between mechanists, who believe that the laws of physics, chemistry, and biology are sufficient to explain life, and vitalists, who believe that living creatures are imbued with an additional spiritual quality not explainable by science.
Here, he interviews biologist and Nobel laureate Jack Szostak and the Dalai Lama as proponents of these two camps, respectively. Although the Dalai Lama expresses enthusiasm for scientific investigation, most scientists will likely resonate with Szostak’s declaration that “It’s not just atoms and molecules, it’s the organization…it’s no less wonderful or beautiful because we understand that there is a natural origin for [life].”
Part 2, “The Big & The Small,” begins with the familiar “powers of 10” view of the Universe, moving from the subatomic to the galactic. Lightman then queries what such explorations have to do with consciousness, conversing with BINA48, an extraordinary humanoid robot programmed by artificial intelligence with >100 hours of a real woman’s memories. Here, he speculates that BINAs of the future may achieve consciousness. He then probes this issue over Zoom with the Dalai Lama, whom viewers observe watching a movie of BINA48 conversing with the woman from whom BINA48 was programmed; this multilayered interaction is simultaneously disconcerting, comical, and wondrous. After additional interviews with a bioethicist, a rabbi, and others, Lightman ultimately concludes that we may be just atoms and molecules, but, in the words of Emily Dickinson, “The brain is wider than the sky.”
In part 3, “Homo Techno,” Lightman contemplates our inner world of perception, consciousness, and self-awareness. He meets Erik Sorto, who lost all movement from the neck down after being shot in the back. With electrodes implanted into his posterior parietal cortex and 2 years of training, Sorto learned to control the movement of a robotic arm by thought. This is an extraordinary achievement, and Lightman posits that it is an example of the beginning of our transition from Homo sapiens into Homo techno, part-human, part-machine entities that reflect the modification of human evolution by technological means. From an actual evolutionary biology standpoint, this is nonsensical, and it is unclear that Lightman even means to propose such an idea, but this section’s loose language will rankle some viewers.
At another point in the series’ final episode, Lightman finds himself dizzy from talk of neurons and galaxies and takes refuge in closely examining a single square inch of earth. His biophilia is obviously meaningful to him, and it would have been stimulating had the episode included interviews with an evolutionary biologist or naturalist, who might have helped to better articulate this facet of the human experience. The series closes, appropriately, with philosophical musings about the need for each of us to find meaning for ourselves.
A small problem at the outset is the inherent impossibility of conveying transcendence through description—the degree to which viewers relate to Lightman’s moment of enlightenment will depend on their own experiences and inclinations. Additionally, Lightman’s screen persona leaves something to be desired. There are, however, few people better qualified to explore these issues, and as the series progresses, his humanity shines through, bringing a welcome lightness to some potentially ponderous material.
Despite its focus on phenomena currently unexplainable by science, “Searching” is full of the joy and passion that can be found in the doing of science and succeeds in conveying how deeply meaningful science is to its practitioners. It is well worth your time and is especially recommended to families with kids curious about life and our world.