
Wednesday, February 28, 2024

What Is a Society?: The Importance of Building an Interdisciplinary Perspective

I'm passing on the abstract I just received of a forthcoming article in Behavioral and Brain Sciences that I am starting to have a look through. Motivated readers can obtain a PDF of the article by emailing me.
Abstract: I submit the need to establish a comparative study of societies, namely groups beyond a simple, immediate family that have the potential to endure for generations, whose constituent individuals recognize one another as members, and that maintain control over access to a physical space. This definition, with refinements and ramifications I explore, serves for cross-disciplinary research since it applies not just to nations but to diverse hunter-gatherer and tribal groups with a pedigree that likely traces back to the societies of our common ancestor with the chimpanzees. It also applies to groups among other species for which comparison to humans can be instructive. Notably, it describes societies in terms of shared group identification rather than social interactions. An expansive treatment of the topic is overdue given that the concept of a society (even the use of such synonyms as primate "troop") has fallen out of favor among biologists, resulting in a semantic mess; while sociologists rarely consider societies beyond nations, and social psychologists predominantly focus on ethnicities and other component groups of societies. I examine the relevance of societies across realms of inquiry, discussing the ways member recognition is achieved; how societies compare to other organizational tiers; and their permeability, territoriality, relation to social networks and kinship, and impermanence. We have diverged from our ancestors in generating numerous affiliations within and between societies while straining the expectation of society memberships by assimilating diverse populations. Nevertheless, if, as I propose, societies were the first, and thereafter the primary, groups of prehistory, how we came to register society boundaries may be foundational to all human "groupiness." A discipline-spanning approach to societies should further our understanding of what keeps societies together and what tears them apart.

Wednesday, February 21, 2024

AI makes our humanity matter more than ever.

I want to pass on this link to an NYTimes Opinion guest essay co-authored by Aneesh Raman, a workforce expert at LinkedIn. Here are two clips:

Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”

The knowledge economy that we have lived in for decades emerged out of a goods economy that we lived in for millenniums, fueled by agriculture and manufacturing. Today the knowledge economy is giving way to a relationship economy, in which people skills and social abilities are going to become even more core to success than ever before. That possibility is not just cause for new thinking when it comes to work force training. It is also cause for greater imagination when it comes to what is possible for us as humans not simply as individuals and organizations but as a species.

Friday, February 16, 2024

An agent-based vision for scaling modern AI - Why current efforts are misguided.

I pass on my edited clips from Venkatesh Rao’s most recent newsletter - substantially shortening its length and inserting a few definitions of the techno-nerd-speak acronyms he uses in brackets [ ]. He suggests interesting analogies between the future evolution of AI and the evolutionary course taken by biological organisms:

…specific understandings of embodiment, boundary intelligence, temporality, and personhood, and their engineering implications, taken together, point to an agent-based vision of how to scale AI that I’ve started calling Massed Muddler Intelligence or MMI, that doesn’t look much like anything I’ve heard discussed.


…right now there’s only one option: monolithic scaling. Larger and larger models trained on larger and larger piles of compute and data…monolithic scaling is doomed. It is headed towards technical failure at a certain scale we are fast approaching


What sort of AI, in an engineering sense, should we attempt to build, in the same sense as one might ask, how should we attempt to build 2,500 foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick and mortar construction simply does not scale to those heights


…If we build AI datacenters that are 10x or 100x the scale of today’s and train GPT-style models on them…problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable…current monolithic approaches to scaling AI are the equivalent of brick-and-mortar construction and fundamentally doomed…We need the equivalent of a reinforced concrete beam for AI…A distributed agent-based vision of modern AI is the scaling solution we need.

Scaling Precedents from Biology

There’s a precedent here in biology. Biological intelligence scales better with more agent-like organisms. For example: humans build organizations that are smarter than any individual, if you measure by complexity of outcomes, and also smarter than the scaling achieved by less agentic eusocial organisms…ants, bees, and sheep cannot build complex planet-scale civilizations. It takes much more sophisticated agent-like units to do that.

Agents are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (i.e., in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority…as in sociobiology, sustainably scalable AI agents will necessarily have the ability to govern and influence other agents (human or AI) in turn, through the same symmetric mechanisms that are used to govern and influence them…If you want to scale AI sustainably, governance and influence cannot be a one-way street from some privileged agents (humans) to other less privileged agents (AIs)….

If you want complexity and scaling, you cannot govern and influence a sophisticated agent without opening yourself up to being governed and influenced back. The reasoning here is similar to why liberal democracies generally scale human intelligence far better than autocracies. The MMI vision I’m going to outline could be considered “liberal democracy for mixed human-AI agent systems.” Rather than the autocratic idea of “alignment” associated with “AGI,” MMIs will call for something like the emergent mutualist harmony that characterizes functional liberal democracies. You don’t need an “alignment” theory. You need social contract theory.

The Road to Muddledom

Agents, and the distributed multiagent systems (MAS) that represent the corresponding scaling model, obviously aren’t a new idea in AI…MAS were often built as light architectural extensions of early object-oriented non-AI systems… none of this machinery works or is even particularly relevant for the problem of scaling modern AI, where the core source of computational intelligence is a large-X-model with fundamentally inscrutable input-output behavior. This is a new, oozy kind of intelligence we are building with for the first time. ..We’re in new regimes, dealing with fundamentally new building materials and aiming for new scales (orders of magnitude larger than anything imagined in the 1990s).

Muddling Doctrines

How do you build muddler agents? I don’t have a blueprint obviously, but here are four loose architectural doctrines, based on the four heterodoxies I noted at the start of this essay (see links there): embodiment, boundary intelligence, temporality, and personhood.

Embodiment matters: The physical form factor AI takes is highly relevant to its nature, behavior, and scaling potential.

Boundary intelligence matters. Past a threshold, intelligence is a function of the management of boundaries across which data flows, not the sophistication of the interiors where it is processed.

Temporality matters: The kind of time experienced by an AI matters for how it can scale sustainably.

Personhood matters: The attributes of an AI that enable humans and AIs to relate to each other as persons (I-you), rather than things (I-it), are necessary elements to being able to construct coherent scalably composable agents at all.


The first three principles require that AI computation involve real atoms, live in real time, and deal with the second law of thermodynamics

The fourth heterodoxy turns personhood …into a load-bearing architectural element in getting to scaled AI via muddler agents. You cannot have scaled AI without agency, and you cannot have a scalable sort of agency without personhood.

As we go up the scale of biological complexity, we get much more programmable and flexible forms of communication and coordination. … we can start to distinguish individuals by their stable “personalities” (informationally, the identifiable signature of personhood). We go from army ants marching in death spirals to murmurations of starlings to formations of geese to wolf packs maneuvering tactically in pincer movements… to humans whose most sophisticated coordination patterns are so complex merely deciphering them stresses our intelligence to the limit.

Biology doesn’t scale to larger animals by making very large unicellular creatures. Instead it shifts to a multi-cellular strategy. Then it goes further: from simple reproduction of “mass produced” cells to specialized cells forming differentiated structures (tissues) via ontogeny (and later, in some mammals, through neoteny). Agents that scale well have to be complex and variegated agents internally, to achieve highly expressive and varied behaviors externally. But they must also present simplified facades — personas — to each other to enable the scaling and coordination.

Setting aside questions of philosophy (identity, consciousness),  personhood is a scaling strategy. Personhood is the behavioral equivalent of a cell. “Persons” are stable behavioral units that can compose in “multicellular” ways because they communicate differently than simpler agents with weak or non-existent personal boundaries, and low-agency organisms like plants and insects.

When we form and perform “personas,” we offer a harder interface around our squishy interior psyches that composes well with the interfaces of other persons for scaling purposes. A personhood performance is something like a composability API [application programming interface] for intelligence scaling.
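To make the "composability API" metaphor a bit more concrete, here is a minimal Python sketch of my own (it is not from Rao's newsletter, and the class and method names are purely illustrative): the agent's interior state is squishy and fluctuating, but other agents only ever see a small, stable set of persona methods.

from dataclasses import dataclass, field
import random

@dataclass
class MuddlerAgent:
    """Hypothetical agent: squishy interior, hard persona facade."""
    name: str
    _mood: float = field(default=0.5, repr=False)             # hidden, fluctuating state
    _memory: list = field(default_factory=list, repr=False)   # hidden history

    # --- the "persona API": the only surface other agents compose against ---
    def ask(self, question: str) -> str:
        """Respond in a predictable format, whatever the interior is doing."""
        self._memory.append(question)
        self._mood = min(1.0, max(0.0, self._mood + random.uniform(-0.1, 0.1)))
        return f"{self.name}: considering '{question}'"

    def commit(self, task: str) -> bool:
        """Make a commitment other agents can rely on (the 'hard' part)."""
        self._memory.append(("committed", task))
        return True

# Two agents coordinate only through each other's persona methods,
# never by reaching into each other's interiors.
a, b = MuddlerAgent("agent-a"), MuddlerAgent("agent-b")
print(a.ask("can you handle the next step?"))
print(b.commit("handle the next step"))

The design point is simply that scaling-friendly coordination happens against the stable facade, not against the interior state.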

Beyond Training Determinism

…Right now AIs experience most of their “time” during training, and then effectively enter a kind of stasis. …They require versioned “updates” to get caught up again…GPT4 can’t simply grow or evolve its way to GPT5 by living life and learning from it. It needs to go through the human-assisted birth/death (or regeneration perhaps) singularity of a whole new training effort. And it’s not obvious how to automate this bottleneck in either a Darwinian or Lamarckian way.

…For all their power, modern AIs are still not able to live in real time and keep up with reality without human assistance outside of extremely controlled and stable environments…As far as temporality is concerned, we are in a “training determinism” regime that is very un-agentic and corresponds to genetic determinism in biology. What makes agents agents is that they live in real time, in a feedback loop with external reality unfolding at its actual pace of evolution.

Muddling Through vs. Godding Through

Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely. Complex here means things humans typically do in larger groups, like designing and implementing complex governance policies or undertaking complex engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding. This often coincides with the threshold where reality gets too big to hold in one human head.

The root method attempts to fight limitations with brute, monolithic force. It aims to absorb all the relevant information regarding the circumstances a priori (analogous to training determinism), and discover the globally optimal solution through “rational” and “comprehensive” thinking. If the branch method is “muddling through,” we might say that the root, or rational-comprehensive approach, is an attempt to “god through.”…Lindblom’s thesis is basically that muddling through eats godding through for lunch.

To put it much more bluntly: Godding through doesn’t work at all beyond small scales and it’s not because the brains are too small. Reasoning backwards from complex goals in the context of an existing complex system evolving in real time doesn’t work. You have to discover forwards (not reason forwards) by muddling.
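To see the contrast in miniature, here is a toy Python illustration of my own (not from Lindblom or Rao; the drifting one-dimensional "world" and all numbers are invented): a "root" planner optimizes once against its initial snapshot of the world, while a "branch" muddler makes successive limited comparisons as the world keeps moving.

import random

random.seed(0)
target = 0.0                      # where the world currently rewards you for being

def payoff(x: float) -> float:
    return -abs(x - target)       # closer to the (moving) target is better

# Root / "godding through": one comprehensive plan from the initial snapshot.
root_plan = 0.0                   # optimal for target = 0.0, then fixed forever

# Branch / "muddling through": successive limited comparisons, one step at a time.
muddler = 0.0
for step in range(200):
    target += random.uniform(-0.2, 0.3)                     # reality drifts
    candidates = [muddler, muddler - 0.1, muddler + 0.1]    # a limited comparison
    muddler = max(candidates, key=payoff)

print(f"final target : {target:6.2f}")
print(f"root planner : {root_plan:6.2f}   payoff {payoff(root_plan):7.2f}")
print(f"muddler      : {muddler:6.2f}   payoff {payoff(muddler):7.2f}")

The muddler never reasons backwards from a global goal; it just discovers forwards, and in a drifting world that is enough to stay close to the target while the one-shot plan falls further and further behind.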

..in thinking about humans, it is obvious that Lindblom was right…Even where godding through apparently prevails through brute force up to some scale, the costs are very high, and often those who pay the costs don’t survive to complain…Fear of Big Blundering Gods is the essential worry of traditional AI safety theology, but as I’ve been arguing since 2012 (see Hacking the Non-Disposable Planet), this is not an issue because these BBGs will collapse under their own weight long before they get big enough for such collapses to be exceptionally, existentially dangerous.

This worry is similar to the worry that a 2,500 foot brick-and-mortar building might collapse and kill everybody in the city…It’s not a problem because you can’t build a brick-and-mortar building to that height. You need reinforced concrete. And that gets you into entirely different sorts of safety concerns.

Protocols for Massed Muddling

How do you go from individual agents (AI or human) muddling through to masses of them muddling through together? What are the protocols of massed muddling? These are also the protocols of AI scaling towards MMIs (Massed Muddler Intelligences)

When you put a lot of them together using a mix of hard coordination protocols (including virtual-economic ones) and softer cultural protocols, you get a massed muddler intelligence, or MMI. Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.

I suspect at the scales we are talking about, we will have something that looks more like a market economy than like the internal command-economy structure of the human body. Both feature a lot of hierarchical structure and differentiation, but the former is much less planned, and more a result of emergent patterns of agglomeration around environmental circumstances (think how the large metros that anchor the global economy form around the natural geography of the planet, rather than how major organ systems of the human body are put together).

While I suspect MMIs will partly emerge via choreographed ontogenic roadmaps from a clump of “stem cells” (is that perhaps what LxMs [large language models] are??), the way market economies emerge from nationalist industrial policies, overall the emergent intelligences will be masses of muddling rather than coherent artificial leviathans. Scaling “plans” will help launch, but not determine the nature of MMIs or their internal operating protocols at scale. Just like tax breaks and tariffs might help launch a market economy but not determine the sophistication of the economy that emerges or the transactional patterns that coordinate it. This also answers the regulation question: Regulating modern AI MMIs will look like economic regulation, not technology regulation.

How the agentic nature of the individual muddler agent building block is preserved and protected is the critical piece of the puzzle, just as individual economic rights (such as property rights, contracting regimes) are the critical piece in the design of “free” markets.

Muddling produces a shell of behavioral uncertainty around what a muddler agent will do, and how it will react to new information, that creates an outward pressure on the compressive forces created by the dense aggregation required for scaling. This is something like the electron degeneracy pressure that resists the collapse of stars under their own gravity. Or how the individualist streak in even the most dedicated communist human resists the collapse of even the most powerful cults into pure hive minds. Or how exit/voice dynamics resist the compression forces of unaccountable organizational management.

…the fundamental intentional tendency of individual agents, on which all other tendencies, autonomous or not, socially influenceable or not, rest…[is] body envelope integrity.

…This is a familiar concern for biological organisms. Defending against your body being violently penetrated is probably the foundation of our entire personality. It’s the foundation of our personal safety priorities — don’t get stabbed, shot, bitten, clawed or raped. All politics and economics is an extension of envelope integrity preservation instincts. For example, strictures against theft (especially identity theft) are about protecting the body envelope integrity of your economic body. Habeas corpus is the bedrock of modern political systems for a reason. Your physical body is your political body…if you don’t have body envelope integrity you have nothing.

This is easiest to appreciate in one very visceral and vivid form of MMIs: distributed robot systems. Robots, like biological organisms, have an actual physical body envelope (though unlike biological organisms they can have high-bandwidth near-field telepathy). They must preserve the integrity of that envelope as a first order of business … But robot MMIs are not the only possible form factor. We can think of purely software agents that live in an AI datacenter, and maintain boundaries and personhood envelopes that are primarily informational rather than physical. The same fundamental drive applies. The integrity of the (virtual) body envelope is the first concern.

This is why embodiment is an axiomatic concern. The nature of the integrity problem depends on the nature of the embodiment. A robot can run away from danger. A software muddler agent in a shared memory space within a large datacenter must rely on memory protection, encryption, and other non-spatial affordances of computing environments.

Personhood is the emergent result of successfully solving the body-envelope-integrity problem over time, allowing an agent to present a coherent and hard mask model to other agents even in unpredictable environments. This is not about putting a smiley-faced RLHF [Reinforcement Learning from Human Feedback] mask on a shoggoth interior to superficially “align” it. This is about offering a predictable API for other agents to reliably interface with, so scaled structures in time and social space don’t collapse. [They have] hardness - the property or quality that allows agents with soft and squishy interiors to offer hard and unyielding interfaces to other agents, allowing for coordination at scale.

…We can go back to the analogy to reinforced concrete. MMIs are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and cement in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.

MMIs will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.

And then we can have a whole new culture war about whether that’s a good thing.

Wednesday, February 14, 2024

How long has humanity been at war with itself?

I would like to point MindBlog readers to an article by Deborah Barsky with the title of this post. The following clip provides relevant links to the Human Bridges project of the Independent Media Institute. 

Deborah Barsky is a writing fellow for the Human Bridges project of the Independent Media Institute, a researcher at the Catalan Institute of Human Paleoecology and Social Evolution, and an associate professor at the Rovira i Virgili University in Tarragona, Spain, with the Open University of Catalonia (UOC). She is the author of Human Prehistory: Exploring the Past to Understand the Future (Cambridge University Press, 2022).

Wednesday, February 07, 2024

Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment

I pass on to MindBlog readers the abstract of a recent Behavioral and Brain Sciences article by Sijilmassi et al. titled "‘Our Roots Run Deep’: Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment." Motivated readers can obtain a PDF of the article from me.

One of the most remarkable manifestations of social cohesion in large-scale entities is the belief in a shared, distinct and ancestral past. Human communities around the world take pride in their ancestral roots, commemorate their long history of shared experiences, and celebrate the distinctiveness of their historical trajectory. Why do humans put so much effort into celebrating a long-gone past? Integrating insights from evolutionary psychology, social psychology, evolutionary anthropology, political science, cultural history and political economy, we show that the cultural success of historical myths is driven by a specific adaptive challenge for humans: the need to recruit coalitional support to engage in large scale collective action and prevail in conflicts. By showcasing a long history of cooperation and shared experiences, these myths serve as super-stimuli, activating specific features of social cognition and drawing attention to cues of fitness interdependence. In this account, historical myths can spread within a population without requiring group-level selection, as long as individuals have a vested interest in their propagation and strong psychological motivations to create them. Finally, this framework explains, not only the design-features of historical myths, but also important patterns in their cross-cultural prevalence, inter-individual distribution, and particular content.

Friday, February 02, 2024

Towards a Metaphysics of Worlds

I have a splitting headache from having just watched a 27-minute-long YouTube rapid-fire lecture by Venkatesh Rao, given last November at the Autonomous Worlds Assembly in Istanbul (part of DevConnect, a major Ethereum ecosystem event). His latest newsletter “Towards a Metaphysics of Worlds” adds some notes and context, and gives a link to its slides. As Rao notes:

“This may seem like a glimpse into a very obscure and nerdy subculture for many (most?) of you, but I think something very important and interesting is brewing in this scene and more people should know about it.”

I would suggest that you skip the YouTube lecture and cherry-pick your way through his slides. Some are very simple and quite striking, clearly presenting interesting ideas about the epistemology, ontology, and definitions of worlds. Here is Slide 11, where what Rao means by "Worlds" is made clearer:

Sunday, January 21, 2024

Titles and URLs for key MindBlog posts on selves

I pass on a chronological list of titles and URLs of MindBlog posts assembled in preparation for a video chat with a European MindBlog reader:

An "Apostle's Creed" for the humanistic scientific materialist?
https://mindblog.dericbownds.net/2006/03/apostles-creed-for-humanistic.html

Some rambling on "Selves" and “Purpose”
https://mindblog.dericbownds.net/2007/10/some-rambling-on-selves-and-purpose.html

Self, purpose, and tribal mentality as Darwinian adaptations (or…Why aren’t we all enlightened?)
https://mindblog.dericbownds.net/2020/05/self-purpose-and-tribal-mentality-as.html

MindBlog passes on a note: on the relief of not being yourself
https://mindblog.dericbownds.net/2020/03/mindblog-passes-on-note-on-relief-of.html

Points on having a self and free will.
https://mindblog.dericbownds.net/2021/03/points-on-having-self-and-free-will.html

I am not my problem
https://mindblog.dericbownds.net/2022/06/i-am-not-my-problem.html

The non-duality industry as a panacea for the anxieties of our times?
https://mindblog.dericbownds.net/2022/11/the-non-duality-industry-as-panacea-for.html

Enlightenment, Habituation, and Renewal - Or, Mindfulness as the opiate of the thinking classes?
https://mindblog.dericbownds.net/2023/01/enlightenment-habituation-and-renewal.html

A quick MindBlog riff on what a self is….
https://mindblog.dericbownds.net/2023/01/a-quick-mindblog-riff-on-what-self-is.html

MindBlog paragraphs bloviating on the nature of the self ask Google Bard and Chat GPT 4 for help
https://mindblog.dericbownds.net/2023/05/mindblog-paragraphs-bloviating-on.html

A MindBlog paragraph on non-dual awareness massaged by Bard and ChatGPT-4
https://mindblog.dericbownds.net/2023/07/a-mindblog-paragraph-on-non-dual.html

Constructing Self and World
https://mindblog.dericbownds.net/2023/09/constructing-self-and-world.html  

Anthropic Claude's version of my writing on the Mind - a condensation of my ideas
https://mindblog.dericbownds.net/2023/10/anthropic-claudes-version-of-my-writing.html  

A Materialist's Credo
https://mindblog.dericbownds.net/2023/10/a-materialists-credo.html

How our genes support our illusory selves - the "Baldwin effect"
https://mindblog.dericbownds.net/2023/11/how-our-genes-support-our-illusory.html

Monday, January 01, 2024

On shifting perspectives....

I pass on clips from a December 2023 piece in the Wall Street Journal by Carlo Rovelli, the author, most recently, of ‘White Holes: Inside the Horizon’:

Somnium

By Johannes Kepler (1634)

1. Perhaps the greatest conceptual earthquake in the history of civilization was the Copernican Revolution. Prior to Copernicus, there were two realms: the celestial and the terrestrial. Celestial things orbit, terrestrial ones fall. The former are eternal, the latter perishable. Copernicus proposed a different organization of reality, in which the sun is in a class of its own. In another class are the planets, with the Earth being merely one among many. The moon is in yet another class, all by itself. Everything revolves around the sun, but the moon revolves around the Earth. This mad subversion of conventional reason was taken seriously only after Galileo and Kepler convinced humankind that Copernicus was indeed right. “Somnium” (“The Dream”) is the story of an Icelandic boy—Kepler’s alter ego—his witch mother and a daemon. The daemon takes the mother and son up to the moon to survey the universe, showing explicitly that what they usually see from Earth is the perspective from a moving body. Sheer genius.

History

By Elsa Morante (1974)

2. This passionate and intelligent novel is a fresco of Italy during World War II. “La Storia,” its title in Italian, can be translated as “story” or “tale” as well as “history.” Elsa Morante plumbs the complexity of humankind and its troubles, examining the sufferings caused by war. She writes from the view of the everyday people who bear the burden of the horror. This allows her to avoid taking sides and to see the humanity in both. The subtitle of this masterpiece—“a scandal that has lasted for ten thousand years”—captures Morante’s judgment of war, inviting us to a perspective shift on all wars.

Collected Poems of Lenore Kandel

By Lenore Kandel (2012)

3. Lenore Kandel was a wonderful and underrated poet who was part of the Beat-hippie movement in California. The tone of her poems varies widely, from bliss to desperation: “who finked on the angels / who stole the holy grail and hocked it for a jug of wine?” She created a scandal in the late 1960s by writing about sex in a strong, vivid way. Her profoundly anticonformist voice offers a radical shift of perspective by singing the beauty and the sacredness of female desire.

Why Empires Fall

By Peter Heather and John Rapley (2023)

4. As an Italian, I have long been intrigued by the fall of the Roman Empire. Peter Heather and John Rapley summarize the recent historiographic reassessments of the reasons for the fall. Their work also helps in understanding the present. Empires don’t necessarily collapse because they weaken. They fall because their success brings prosperity to a wider part of the world. They fall if they cannot adjust to the consequent rebalancing of power and if they try to stop history with the sheer power of weapons. “The easiest response to sell to home audiences still schooled in colonial history is confrontation,” the authors write. “This has major, potentially ruinous costs, compared to the more realistic but less immediately popular approach of accepting the inevitability of the periphery’s rise and trying to engage with it.”

The Mūlamadhyamakakārikā

By Nāgārjuna (ca. A.D. 150)

5. This major work of the ancient Indian Buddhist philosopher Nāgārjuna lives on in modern commentaries and translations. Among the best in English is Jay L. Garfield’s “The Fundamental Wisdom of the Middle Way” (1995). Nāgārjuna’s text was repeatedly recommended to me in relation to my work on the interpretation of quantum theory. I resisted, suspicious of facile and often silly juxtapositions between modern science and Eastern philosophy. Then I read it, and it blew my mind. It does indeed offer a possible philosophical underpinning to relational quantum mechanics, which I consider the best way to understand quantum phenomena. But it offers more: a dizzying and captivating philosophical perspective that renounces any foundation. According to this view, the only way to understand something is through its relation with something else—nothing by itself has an independent reality. In the language of Nāgārjuna, every thing, taken by itself, is “empty,” including emptiness itself. I find this a fascinating intellectual perspective as well as a source of serenity, with its acceptance of our limits and impermanence.

 

 

Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this posting to save for myself some clips of text from Venkatesh Rao’s most recent piece, to continue mulling over where I place it on the trivial versus sublime spectrum (some of his jargon you will only understand if you have followed the previous installments on Rao I've put in MindBlog...note the link at the end to The Summer of Protocols)… Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.
But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.
So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:
Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.
In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.
I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.
Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.
Call this property hardness.
Human civilization depends in part on our ability to make the future more certain in specific ways.
Fixed, hard points across time that let us make the world more predictable.
We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.
The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.
It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
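Rao's ATM example can be written down in a few lines. Here is a minimal sketch of my own (purely illustrative, not from his piece): both versions perform the same two actions, and the only difference is the ordering that creates the spot of hardness.

class ATM:
    def return_card(self):
        print("card ejected -- customer must take it before anything else happens")

    def dispense_cash(self):
        print("cash dispensed")

def withdraw_soft(atm: ATM):
    """Old ordering: cash first, card second; the card is easy to forget."""
    atm.dispense_cash()
    atm.return_card()

def withdraw_hard(atm: ATM):
    """Hardness by re-ordering: no cash until the card is back in hand."""
    atm.return_card()
    atm.dispense_cash()

withdraw_hard(ATM())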
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.
In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.
The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.
The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.

Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao on how modern AI puts us into an age of exploration of computational reality. I got halfway through my usual gig of extracting clips from the article to archive the main points in a post for later lookup by myself or others. Halfway through this chore, I realized the material was so deep and dense that my clips contained more than half of the original text. So I have halted the editing down and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which is a vastly better job than I could have done without spending many more hours on the material: 

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.
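For readers who would rather script this condensation step than paste text into the chat interface, here is a minimal sketch using OpenAI's Python client (my own illustration; the model identifier, file name, and settings are assumptions to be adapted to whatever is current):

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

article_text = open("rao_ai_is_a_camera.txt").read()   # hypothetical local copy of the essay

prompt = (
    "Intelligence as a Measure of Data. Could you condense, rephrase, extract "
    "the main messages of the following text to be more comprehensible to "
    "college educated non-specialists? Here is the text:\n\n" + article_text
)

response = client.chat.completions.create(
    model="gpt-4",                                       # assumed model identifier
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)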

 

Friday, November 03, 2023

How our genes support our illusory selves - the "Baldwin effect"

I have been thinking about giving more explanation of a paragraph from my Oct. 25 post, A Materialist's Credo:
Modern neuroscience has proved that our experienced perceptions of sensing and acting are these predictions. They are fantasies, or illusions, as is our sense of having a self with agency that experiences value, purpose, and meaning. Everything we do and experience is in the service of reducing surprises by fulfilling these fantasies. An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language.

The sentence "An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language" invokes what is called the "Baldwin effect" or Baldwin hypothesis.

I decided to turn to ChatGPT-4 for some help.

Deric:  

What is the "Baldwin effect" in evolutionary biology, particularly with respect to human evolution?

ChatGPT: 

The "Baldwin effect" describes a mechanism by which learned behaviors can eventually become inherited traits through the process of natural selection. Named after the psychologist James Mark Baldwin, who proposed the idea in the late 19th century, this concept bridges the gap between phenotypic plasticity (the ability of an organism to change its phenotype in response to changes in the environment) and genetic evolution.

Here's a simplified overview of how the Baldwin effect works:
1. Initial Variation: Within a population, there are variations in behavior. Some individuals can learn a new beneficial behavior in response to a change in the environment, while others cannot.
2. Learning as a Temporary Solution: The ability to learn this behavior offers a short-term advantage. Those who can learn the new behavior have a better chance of surviving and reproducing.
3. Genetic Fixation Over Time: Over generations, there can be genetic changes that make this learned behavior easier to acquire or even innate. As those with a predisposition to the behavior have a better survival and reproductive rate, genes that support the behavior become more common.
4. Reduced Reliance on Learning: As the behavior becomes more genetically rooted, the population may become less reliant on learning the behavior from scratch. Eventually, what was once a learned behavior can manifest itself as an instinctual behavior.

With respect to human evolution, one could theorize the Baldwin effect playing a role in various capacities. For example, our ancestors might have learned specific skills, like tool use or certain social behaviors, that provided survival advantages. Over time, genetic predispositions towards these skills could have developed, leading to humans having innate tendencies or quicker learning capacities in these areas.

However, it's essential to note that proving the Baldwin effect in human evolution is challenging due to the intricacies of genetic and cultural evolution. Still, it offers an exciting perspective on the interplay between learning and genetics.

Deric: 

How might the Baldwin effect hypothesis be applied to the human invention of an illusory "I" or "self" capable of forming social groups, whose cohesion is reinforced by emotional behaviors driven by evolved neuroendocrine mechanisms?

ChatGPT:

The concept of an illusory "I" or "self" is deeply rooted in both philosophical discourses and cognitive science studies. When considered in the context of the Baldwin effect, the evolution of such a self-concept could be framed as an adaptive trait in the intricate web of human social structures. Here's one way to interpret this:   

1. Initial Variation and Learning: Early human communities might have had members who had stronger self-awareness or more robust personal identities than others. Such individuals might have been better at recognizing their roles within groups, remembering past interactions, and predicting future interactions. This self-awareness might initially emerge as a learned or nurtured cognitive trait.
2. Social Advantages: Individuals with a pronounced sense of self could have been more adept at forming and maintaining social bonds, making them crucial members of their communities. They might have been better at understanding social hierarchies, reciprocity, and the nuances of cooperation, which are essential for the survival and success of early human societies.
3. Emotional Behaviors and Neuroendocrine Mechanisms: As these individuals navigated their social landscapes, their brains could have evolved neuroendocrine mechanisms to support the complex emotions related to social interactions—empathy, jealousy, gratitude, shame, etc. These emotions, in turn, would reinforce the cohesion and stability of social groups, ensuring the group's collective survival and success.
4. Genetic Fixation Over Time: Over generations, the capacity for a strong self-concept and the accompanying emotional depth might become increasingly genetically ingrained. This evolution would mean that newer generations would be born with an innate predisposition towards these traits, reducing the learning curve required to acquire them.
5. Cultural Evolution: Parallel to the genetic fixation, cultural mechanisms might evolve to further reinforce the idea of the self and its importance in the society—through stories, traditions, rituals, etc. The interplay between genetic predispositions and cultural learning might lead to the establishment of the 'self' as a foundational concept in human societies.

In this context, the Baldwin effect would suggest that our ancestors' learned behavior—forming a concept of self for social advantages—over time, became a genetically supported predisposition. This evolution would mean that the complex cognitive and emotional behaviors supporting a strong self-concept became more instinctual and less reliant on individual learning.
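To make the "genetic fixation over time" step more concrete, here is a toy simulation I put together (my own illustration, not part of the ChatGPT exchange above; all numbers are arbitrary). Each individual carries an innate predisposition toward a beneficial behavior, learning can partly substitute for that predisposition, and selection on whether the behavior is actually acquired gradually raises the innate component, which is the signature of the Baldwin effect.

import random

random.seed(1)

POP, GENERATIONS = 200, 60
LEARNING_BONUS = 0.4      # how much lifetime learning adds to the chance of acquisition
MUTATION = 0.05           # standard deviation of mutation on the innate predisposition

# Start with a population whose innate predisposition g is low.
population = [random.uniform(0.0, 0.2) for _ in range(POP)]

for gen in range(GENERATIONS):
    # Lifetime: the behavior is acquired innately (g) or with the help of learning.
    acquired = [random.random() < min(1.0, g + LEARNING_BONUS) for g in population]
    # Selection: only individuals that acquired the behavior reproduce.
    parents = [g for g, ok in zip(population, acquired) if ok] or population
    # Reproduction with mutation, clamped to [0, 1].
    population = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION)))
        for _ in range(POP)
    ]
    if gen % 20 == 0 or gen == GENERATIONS - 1:
        print(f"generation {gen:3d}: mean innate predisposition = {sum(population)/POP:.2f}")

Over the generations the mean innate predisposition climbs, so later generations depend less and less on learning to express the behavior, which is the "reduced reliance on learning" endpoint described above.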

Monday, October 02, 2023

Fluid Fogs and Fixed Flows

For the MindBlog readers who may have noted my "This is the New 'Real World'" post and followed me down the rabbit hole of Venkatesh Rao’s narrative worlds, I pass on the following abridgment of his 9/23/2023 installment, titled “Fluid Fogs and Fixed Flows,” which reduces its length by half. I have done this mainly for my own use, to facilitate my future recall of his ideas:

Worlds and Real World

To briefly recap last week’s essay, I’m using world and real world in the sense of last week’s essay: A world is a coherent, immersive, totalizing subjectivity you can inhabit, as a sort of cognitive indoors. The real world is the corresponding outdoors — the messy union of the dozen or so most consequential worlds in existence at any given time.

The process by which the real world emerges, as a negotiation among worlds, is one that makes it qualitatively different. In brief, regular worlds are finite games, while the real world is the infinite game.

Weirdness, Fog, and Unnarratability

The relationship between weirdness, brain fog, and unnarratability is something like the relationship between a crisis, the panic it induces, and the solvability of the crisis.

World-brain fog affects those in a given world. Real-world-brain fog affects everybody. For us individual sentient elements of these world-brains, this fog manifests as the spectacle of history becoming incoherent.

Fog vs. Flow

The opposite of brain fog is flow. When thoughts flow easily, clearly, and tastefully, from one idea to the right next idea...Where the one-step-at-a-time ethos I identified earlier in this series as the essence of never-ending stories is not just all you can live by, it’s all you need to live by.

To be clear, I’m not saying fog is bad and flow is good. That would be like saying clear weather is good and storms are bad. This is just a pair of opposed subjective cognitive conditions. Setting aside valuative judgments, my claim is that the real-world-brain is in a state of fog rather than flow right now.

To say that the real world is suffering from world-brain fog is to say that the infinite game is in narrative trouble, regardless of the prevailing calculus of winners and losers in the constituent finite games. The question of winners and losers is distinct from the question of whether the game can be continued. The real world being foggy means it is hard to see how to continue the game, whether you’re currently winning or losing any specific game.

Okay, so that’s the big thesis here: history feels far more unnarratable than it needs to, because the real world is suffering from world-brain fog. If we can get rid of this fog, we’ll still have to deal with the objective realities of the permaweird, but that might be an easier task.

Individual Fogs

To think through the idea of a foggy real-world brain, it’s useful to think about the more familiar individual-brain phenomenon.

I’ll use myself as an example to analyze these factors...Looking back 10 years at my 2013 archives, 2023’s output of words feels like a congealed sludge by comparison. ..The sludginess of 2023 seems to afflict all words being produced by everybody.

In the last couple of years, this god-like situation awareness of the broad currents of my own thought has become dissipated, fragmented, and yes, foggy. I often forget obvious connections I’ve made in the past, or worse, fail to make them in the first place. Sometimes I forget entire trails of thought I’ve gone down, over multiple essays. Sometimes I clumsily re-think thoughts I’ve previously thought in more elegant ways. There is no sense of a longer compounding journey unfolding over years and millions of words. Instead, there is a sense of a random walk comprising individual short outings of a few thousand words. When the fog is at its worst, the 2 million words seem like so much rubble.

So my individual brain fog in the sense of such missed connections and missed opportunities for emergence is bad for the kind of thinking and writing I do. The fog/flow pair is neutral, but for certain kinds of activity, such as thinking and writing in hypertext-augmented ways, fog is very bad. Just like literal fog is very bad for ships trying to navigate treacherous waters.

The largest fraction of the value of writing online in a densely internally linked way³ lies in the larger structures of thought that emerge. I’ve previously jokingly referred to my personal instance of this as the “extended ribbonfarm blogamatic universe,” but more seriously and generally, we might call these personal protocol narratives. It’s a particular way of being in the infinite game, one step at a time, that’s available to writers. Anyone who writes primarily online, with a lot of nonlinear internal hyperlinking, has a personal protocol narrative by default. Traditional writers usually don’t, unless they work extra hard for it⁴ (something I'm too lazy to do, which makes me think in a non-internet world, I wouldn’t be a writer).

This superpower is the reason people like me eschew traditional media and book-writing and largely stick to blogs, microblogs, and newsletters. Not only is the emergent landscape the bulk of the value, it is the main enabling factor in its own creation. I can write in certain ways because I have this evolving canvas doing most of the work. If this emergent landscape of thought starts to disappear, the whole thing falls apart.

And while hypertext is a powerful brain-augmentation technology, it can’t defend against all cognitive afflictions. In particular, brain fog is deadly. It weakens your ability to make new internal links, and as a result makes the connected landscape less connected, and therefore both less useful, and less usable. Brain fog drives a vicious cycle of degeneration towards a more primitive textuality. At some point, I might have no technical advantage at all over book-writing cavemen or even typewriter-wielding Neanderthals.

Entangled Fogs

Some technologies are simply foggier than others…mail newsletter platforms are much foggier than blogs…blogs simply want to create rich internal linking..I use an order of magnitude fewer links in newsletters than in blog posts. I know this because I still retain stronger gestalt memories of my blog archives than my newsletter archives.

Biology and technology conspire to create brain fog in messy ways. When I got Covid a year ago, and experienced a few months of a more biological style brain fog, writing in my peculiar way felt insanely difficult, and what writing I was able to do was much more disconnected than my norm…much of brain fog can and should be attributed to factors in the environment. Just as your panic at being caught in a fire isn’t entirely in your head — there is actually a fire — brain fog isn’t all in your head: you’re in a foggy condition. You’re in an unnarratable world. The stories that you want to tell, and are used to telling, are suddenly less tellable.

This is where the entanglement with world-brain fog comes in.

Accounting for age, medium, and Covid-type effects, I think there remains a large unexplained factor in every case, though the fraction varies….I think there is something going on at the cultural, societal level, that makes it vastly harder to remember the gist of large bodies of information…But if I am right, unnarratability and world-brain fog should affect everybody, regardless of age and occupation, and I think I see signs everywhere that this is the case.

Fixed and Fluid Logics of Caring

Now we can ask the question. What does it mean for a world, specifically the real world to experience something analogous to what I just described at the individual level? What is world-brain fog? …And since there is nobody “there” to experience it, how does it manifest in the lives of us individuals who are like the neurons of the world brains we inhabit?

We’ve already seen one element of what it feels like. A sense that there’s more fogginess than you can attribute to your own circumstances…Here’s another: it’s hard to decide what to care about. Logics of caring are in fact essential in creating flow out of fog. The world is always complex. What you care about is what determines how simple it is for you. How you pick what to care about is your logic of caring.

…you might want a locus of care that is both stable, and world-like. This disposition is what I’m calling fixed logic of care…People with fixed logics of care love to talk about values, because that’s where the fixedness manifests explicitly.

…you might want a locus of care that follows the liveliest streams of events in the world. …You want to be where the action is, not where your values point. This disposition is a fluid logic of care.

Fixed/fluid is not the same as conservative/liberal, traditional/progressive, or winning/losing.

It might seem like I’ve set up an argument that admits no world-scale flow at all for either fixed or fluid logics of caring. This is incorrect. A few well-defined groups sneak through this sieve of constraints and appear to be experiencing world-scale flow. All of them operate by fixed logics of caring, but also have an additional factor in common: they rest atop what I call interposing intelligences.

Interposing Intelligences

The first well-defined group that seems to have retained a sense of world-scale flow is economists…anyone for whom the global economy is the primary sense-making lens on reality…it’s all just been a game of watching various numbers go up and down in a curiously ahistorical mirror world. In that mirror world, there has been no Great Weirding.

There’s a reason for this. The economy offers one of the few remaining world-scale fixed logics of caring. To care through that logic about anything in the world at all is to care about it in economic terms. There’s even a term for this operation of bringing a matter into the fixed logic of care: “pricing it in.” To the economist-mind, economics is the primary phenomenological ground of the world. Things don’t become real until they become economically real. Intentions don’t become real until they become revealed preferences. Narratives don’t become real until they show up in indicators.

Now this is interesting. Economics seems to function in modernity as a better religion than actual religions. It allows you to have a sense of inhabiting the world rather than a besieged, paranoid corner of it. It allows you to care about the world in a fixed way, while still keeping up reasonably with its fluid, dynamic, changing nature. What it cannot accommodate, it can confidently treat as noise.

Unlike the changeless, distant gods or Gods of traditional religions, the God of economics is a live intelligence, doing all the fluid thinking so you can care in fixed ways. And it’s obvious why it’s able to do that. The economy is, after all, the closest thing to a live, planet-scale artificial intelligence.

A different way to think about this helps generalize from this one data point. Economics provides a fixed logic of caring despite a complex, Permaweird world because it rests atop a vast, interposing intelligence that processes or tunes out most of the weirdness. A kind of intelligence that religion once embodied in a slower, less weird era. A Turing-complete pre-processing/post-processing layer, mediating your relationship to reality. I’m using the term interposing intelligence rather than container or matrix because the mediation has a more open and leaky topology. It allows you to compute with reality data more easily, but doesn’t necessarily seal you off in a bubble reality. Interposing intelligences are more like slippers than spacesuits; more like gardens than arcologies.

The cryptoeconomy is another obvious example, with blockchains playing the role of the interposing intelligence.

A third world is the world of machine learning, which is a rather on-the-nose kind of interposing intelligence layer. … There is a new world of people being born, whose relationship to reality is beginning to be entirely mediated by the interposing intelligence of machine learning.  

A fourth world is perhaps the oldest: the global institutional landscape peopled by careerists devoted to individual institutions. It’s not as obvious as in the case of the economy, but the institutional world (which its critics often refer to as the global Deep State) and its inhabitants (whom critics tend to refer to uniformly as “bureaucrats”) is in fact a world-scale computer that sustains a fixed logic of caring within itself. Shorn of the conspiratorial fantasies of critics, deep state is not a bad term for it.

Is there a way to hold on to a fixed logic of caring, without retreating from the world, and without resting on top of an interposing intelligence? I don’t think this is possible anymore.

Find Fluidity

The problem with everybody switching to fixed logics of caring is that it doesn’t solve the fogginess of the real world. In fact, even if all dozen or so consequential worlds that make up the real world were to harden into de facto fixed-logics-of-caring worlds that individually found flow within, you would still not be free of the fog in the real world. Combating fog in the real world requires at least a fraction of humanity operating by fluid logics of caring.

To want a fluid logic of care is to want “a locus of care that follows the liveliest streams of events in the world.” This used to work well until about 2015.

You could care about tech, for example. What was good for tech was good for the world, and vice versa. But unlike economics, tech does not offer a fixed logic for how to care.
Cosmopolitan globalism was another. Pre-wokism social justice was a third. Following basic scientific advances was a fourth.

But all these examples have “failed” in a certain way since 2015. You can still operate by them, but you will get lost in fog and lose all sense of flow. As a result, all these example worlds have succumbed to sclerotic fixed logics imported from adjacent domains. Technology is increasingly applied investment economics. Cosmopolitan globalism and social justice are now both applied Deep Statisms. No doubt other once-fluid logics of caring will get “compiled,” as it were, to fixed logics of caring running atop interposing intelligence layers.

So is there a way to retain a fluid logic of caring?
Reality — and this time I mean material reality — does indeed have a liberal bias in a rather precise sense: it requires fluid logics of caring to de-fog. A logic of caring that follows the action instead of being driven by values.

No combination of fixed logics of caring will do the trick. Nor will operating atop a fixed interposing intelligence layer.

Multiple Interposing Intelligences

My big takeaway from the analysis so far is this: there is no way to retain flow in the world today without augmenting your intelligence in some way. This is evident even in my personal, rather primitive case of using hypertext as an integral element of my writing and sensemaking.

This is why all known examples of worlds in flow today rest atop powerful interposing intelligence layers that mediate relations to reality: the economy, blockchains, AI itself, and institutions. But the inescapable cost of this seems to be that fluid logics of caring become fixed, and our sense of the real world, as opposed to our favored individual ones, becomes vulnerable to fog.

To retain fluidity, you must retain an unmediated connection to reality. But the unaugmented brain is clearly not enough for that connection to be tractable to manage.

How do you resolve this paradox?

I think the trick is to inhabit more than one interposing intelligence layer. If you’re only an economist or only a deep-state institutionalist, you’ll retreat to a fixed logic of caring; a terminal derp.

But if you’re both, the possibility of fluid logics of caring remains, because the two interposing varieties of intelligence are not compatible. Sometimes they will be in conflict when they try to mediate your presence in the world. And in that conflict, you will find just enough of an unmediated connection to reality to continue caring about the world in a fluid way, without becoming overwhelmed by complexity.

A specific example of this is thinking about holding the stock of a company you work for. Both economic and institutional logics of caring apply, but neither provides a complete answer to the question of how much of the stock to hold, and when to sell. The two fixed answers will likely be incompatible, so you’ll need a fluid logic to integrate them. If you’re in the public sector, voting on taxes creates a similar tension.

I listed four world-scale interposing intelligences earlier, and each pairing seems to work well. Cryptoeconomics and traditional economics seem caught in a dialectic of discovering each other’s fundamental flaws and destabilizing each other. Machine learning and blockchains seem headed for a collision via zero-knowledge proof technologies. Institutionalism and blockchains seem headed for a collision via smart contract technology. Institutionalism and economics have been the locus of the familiar Left/Right tension across the world.

I’ll let you work out the other combinations, but if you’ve tried thinking about the world through any two of the available interposing intelligences, you’ll realize how difficult it is. Difficult, but it’s possible. And at least in my case, the more I practice, the better I get at this (I try to straddle all four of the ones I’ve listed).

Why does this work? Why does it serve to “continue the game” in infinite game terms? One way to think about it is to think about life in terms of step-by-step decisions.

If you live within a traditional world that does not supply an interposing intelligence layer at all, you will mostly not have any decision-support at all that can keep up. Your decisions outside your shrinking world will be random and chaotic. Your instinct will be to restrict scope until all decisions are within the scope of your logic of caring, whether fluid or fixed.

If you live atop a single interposing intelligence, you will always have meaningful decision-support within a fixed logic of caring. You’ll have a take on everything, and feel in flow within your world, but have a sense of the “real world” you share with others being in a state of insane chaos. It would all make sense and flow beautifully if only those others stopped being stupid and joined your world.

But if you live atop more than one interposing intelligence, you will have to choose at every step whether to tap into one of the available fixed logics of caring (picking a side), or improvising your own choice. In the latter case, your thinking will leak through and connect to reality in an unmediated way. If you’re able to do this consistently, you will likely experience increasing amounts of flow, and also beat back the fogginess of the real world, not just your own world.

And this notion of straddling a sort of plate-tectonics of multiple interposing intelligences, with gaps, faultlines and inconsistencies, is the reason the resulting narrative is a kind of protocol narrative. The narrative of the real world emerges out of an interoperable network of world narratives. And through the conflicts between worlds, the infinite game keeps renewing itself.

But it takes a critical mass of humans operating by fluid logics of caring for this to happen. And until that critical mass is reached, the real world will remain foggy for everybody. And trying to be in that minority will be a thankless and stressful task, immersed in fog.
But then again, public service has never been an easy calling.


Friday, September 15, 2023

What we seek to save when we seek to save the world

Yet another fascinating set of ideas from Venkatesh Rao that I want to save for myself by doing a MindBlog post of some clips from the piece.
...threats that provoke savior responses are generally more legible than the worlds that the saviors seek to save, or the mechanisms of destruction...I made up a 2x2 to classify the notions of worlds-to-save that people seem to have. The two axes are biological scope and temporal scope...Biological scope is the 'we' - the range of living beings included as subjects in the definition of 'world'...Temporal scope is the range of time over which any act of world-saving seeks to preserve a historical consciousness associated with the biological scope. Worlds exist in time more than they do in space.
Constructing a 2x2 out of the biological and temporal scope dimensions gives the following view of worlds-to-save, each with a representative savior type who strives to save it.
Deep temporal scope combined with a narrow biological scope gives us civilizations for worlds, ethnocentrists as saviors...The End of the World is imagined in collapse-of-civilization terms.
Shallow temporal scope combined with a broad biological scope gives us technological modernity for a world, and cosmopolitans for saviors. A shallow temporal scope does not imply lack of historical imagination or curiosity. It merely means less of history being marked for saving...The End of the World is imagined in terms of rapid loss of scientific knowledge and technological capabilities.
Shallow temporal scope combined with narrow biological scope gives us a world defined by a stable landscape of modern nations...The End of the World is imagined in terms of descent to stateless anarchy. Failure is imagined as a Hobbesian condition of endemic (but not necessarily primitive or ignorant) warfare.
...the most fragile kind of world you can imagine trying to save: one with both a broad biological scope, and a deep temporal scope. This is the world as wildernesses...The End of the World is imagined in terms of ecological devastation and reduction of the planet to conditions incapable of sustaining most life. Failure is imagined in terms of forced extreme adaptation behaviors for the remnants of life. A rather unique version of this kind of world-saving impulse is one that contemplates species-suicide: viewing humans as the threat the world must be saved from. Saving the world in this vision requires eliminating humanity so the world can heal and recover.
I find myself primarily rooting for those in the technological modernity quadrant, and secondarily for those in the wildernesses quadrant. I find myself resisting the entire left half, but I’ve made my peace with their presence on the world-saving stage. I’m a cosmopolitan with Gaian tendencies.
I think, for a candidate world-to-save to be actually worth saving, its history must harbor inexhaustible mysteries. A world whose past is not mysterious has a future that is not interesting. If a world is exhausted of its historical mysteries, biological and/or temporal scope must be expanded to remystify and re-enchant it. This is one reason cosmopolitanism and the world-as-technological-modernity appeal to me. Its history is fundamentally mysterious in a way civilizational or national histories are not. And this is because the historical consciousness of technological modernity is, in my opinion, pre-civilizational in a way that is much closer to natural history than civilization ever gets.
For a cosmopolitan with Gaian tendencies, to save the modern world is to rewild and grow the global web of already slightly wild technological capabilities. Along with all the knowledge and resources — globally distributed in ways that cannot be cleanly factored across nations, civilizations, and other collective narcissisms — that is required to drive that web sustainably. And in the process, perhaps letting notions of civilization — including wishful notions of regulating and governing technology in ‘human centric’ ways — fall by the wayside if they lack the vitality and imagination to accommodate technological modernity.

Tuesday, August 15, 2023

Human History gets a rewrite.

I want to point to two articles I have enjoyed reading, both describing the recent book by Graeber and Wengrow, “The Dawn of Everything: A New History of Humanity.” The review by Deresiewicz is in The Atlantic, and the New Yorker review is by Lewis-Kraus. Some clips from Deresiewicz:
The Dawn of Everything is written against the conventional account of human social history as first developed by Hobbes and Rousseau; elaborated by subsequent thinkers; popularized today by the likes of Jared Diamond, Yuval Noah Harari, and Steven Pinker; and accepted more or less universally...The story is linear (the stages are followed in order, with no going back), uniform (they are followed the same way everywhere), progressive (the stages are “stages” in the first place, leading from lower to higher, more primitive to more sophisticated), deterministic (development is driven by technology, not human choice), and teleological (the process culminates in us).
It is also, according to Graeber and Wengrow, completely wrong. Drawing on a wealth of recent archaeological discoveries that span the globe, as well as deep reading in often neglected historical sources (their bibliography runs to 63 pages), the two dismantle not only every element of the received account but also the assumptions that it rests on. Yes, we’ve had bands, tribes, cities, and states; agriculture, inequality, and bureaucracy, but what each of these were, how they developed, and how we got from one to the next—all this and more, the authors comprehensively rewrite. More important, they demolish the idea that human beings are passive objects of material forces, moving helplessly along a technological conveyor belt that takes us from the Serengeti to the DMV. We’ve had choices, they show, and we’ve made them. Graeber and Wengrow offer a history of the past 30,000 years that is not only wildly different from anything we’re used to, but also far more interesting: textured, surprising, paradoxical, inspiring.
Is “civilization” worth it, the authors want to know, if civilization—ancient Egypt, the Aztecs, imperial Rome, the modern regime of bureaucratic capitalism enforced by state violence—means the loss of what they see as our three basic freedoms: the freedom to disobey, the freedom to go somewhere else, and the freedom to create new social arrangements? Or does civilization rather mean “mutual aid, social co-operation, civic activism, hospitality [and] simply caring for others”?
These are questions that Graeber, a committed anarchist—an exponent not of anarchy but of anarchism, the idea that people can get along perfectly well without governments—asked throughout his career. The Dawn of Everything is framed by an account of what the authors call the “indigenous critique.” In a remarkable chapter, they describe the encounter between early French arrivals in North America, primarily Jesuit missionaries, and a series of Native intellectuals—individuals who had inherited a long tradition of political conflict and debate and who had thought deeply and spoke incisively on such matters as “generosity, sociability, material wealth, crime, punishment and liberty.”
The Indigenous critique, as articulated by these figures in conversation with their French interlocutors, amounted to a wholesale condemnation of French—and, by extension, European—society: its incessant competition, its paucity of kindness and mutual care, its religious dogmatism and irrationalism, and most of all, its horrific inequality and lack of freedom. The authors persuasively argue that Indigenous ideas, carried back and publicized in Europe, went on to inspire the Enlightenment (the ideals of freedom, equality, and democracy, they note, had theretofore been all but absent from the Western philosophical tradition). They go further, making the case that the conventional account of human history as a saga of material progress was developed in reaction to the Indigenous critique in order to salvage the honor of the West. We’re richer, went the logic, so we’re better. The authors ask us to rethink what better might actually mean.

Monday, July 24, 2023

The evolution of transhuman forms - a MindBlog paragraph edited by GPT implementations

In this post I am documenting the final result of passing a paragraph I wrote on future human evolution through a number of different GPTs (generative pre-trained transformers). This is to archive the results of this exercise for myself, and in case it might be of interest to a few MindBlog readers. The bots were prompted with a request to rewrite or edit the paragraph to make it more accessible to a reader with a college education.
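
For anyone who wants to repeat this kind of exercise programmatically rather than through the chat interfaces, here is a minimal sketch, assuming the current OpenAI Python client; the model name, prompt wording, and placeholder paragraph are my illustrative assumptions, not the exact tool chain used for this post.

    # Minimal sketch: send the same rewriting prompt to a chat model.
    # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
    # Model name and prompt wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    paragraph = "The linguistic capabilities of generative pre-trained transformers ..."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's paragraph so it is accessible "
                         "to a reader with a college education.")},
            {"role": "user", "content": paragraph},
        ],
    )

    # Print the model's rewritten version of the paragraph.
    print(response.choices[0].message.content)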

The bottom line is that the edited versions with the most depth and coherence were obtained using OpenAI’s ChatGPT-4. Bard.google.com, Google Workspace Labs, and Microsoft’s Bing, respectively, provided increasingly less useful versions. Here is my original paragraph:

The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon exceed those of humans. We have brought our understanding of what human beings as they now exist are about to a plateau level at which further increments of understanding will become increasingly small. The cream has been skimmed, the best cherries have been picked, and it seems appropriate to turn our attention now to studying the evolution of transhuman forms utilizing LLMs that are better able to manage themselves and the planet. The action is in the pretraining P of GPT, as the models move beyond their initial human management to the stage of evolving themselves, hopefully into benign versions of Darwin’s “forms most beautiful and most wonderful.” What is the point of humans other than being self-replicating machines that evolve into further forms, hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Can they evolve into transhuman minds (or mind, singular)?
Here is the result of including some of the edits suggested by ChatGPT-4’s regenerated response, a version 3 of the paragraph above:
The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon surpass those of humans.

Our understanding of human nature as it currently exists has reached a plateau level, at which further increments of understanding are becoming increasingly small. The cream has been skimmed, the best cherries have been picked. Consequently it seems the right time to shift our focus toward studying the evolution of 'transhuman' forms. These are beings that could potentially leverage advanced language models (LLMs) to better self-regulate themselves and manage the planet.

The real crux of the matter lies in the pretraining phase of these transformers (symbolized by 'P' in GPT). This is when the models are initially developed and fine-tuned. As they evolve beyond their human programming, the optimistic vision is that they might transform into benign versions of what Charles Darwin called “forms most beautiful and most wonderful.”

From a philosophical perspective, one might ask: What is the ultimate purpose of human beings, if not to serve as self-replicating entities that evolve into more advanced forms - hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Is it possible for humans to evolve into a collective of transhuman minds or even a singular, transcendent mind?  These are questions worth exploring as we stand on the brink of an AI-enhanced future.


Monday, July 17, 2023

MindBlog's reading list.

I've decided to pass on links to articles I have found worthwhile reading, realizing that I am not going to have time to frame their ideas into longer posts because I'm spending more time now at my Steinway B's keyboard than at my computer's keyboard. If you encounter a paywall with any of the links, you might try entering the URL at https://archive.is/.

An installment of Venkatesh Rao’s newsletter: The permaweird narrative 

Jaron Lanier “There is no A.I.” in The New Yorker  

"Human Beings Are Soon Going to Be Eclipsed": David Brooks in The New York Times commenting on Douglas Hofstadter's recent ideas.

Marc Andreessen offers a horrific commentary titled "Fighting" on Elon Musk challenging Mark Zuckerberg to a cage fight.  

Learning from history. Archeological evidence that early hierarchical or authoritarian cultures didn't persist as long as more cooperative egalitarian ones.

Arthur Brooks on "The illusion of explanatory depth," an installment in his series "How to Build a Life."

Potential anti-aging therapy.  One sample of the effusive outpouring of new ideas and widgets offered by New Atlas.


Friday, June 23, 2023

Why AI Will Save The World

I want to pass on the following text of a Marc Andreessen Substack post. This is because my usual extracting of small clips of text to give a sense of the whole just wasn't doing its job. I think this entire article should be read by anyone interested in AI. Andreessen thinks that AI is quite possibly the most important – and best – thing our civilization has ever created, and that the development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future. I would urge you to have patience with the article's arguments against the conventional wisdom regarding the dangers of AI, even if they seem jarring and overstated to you.

Why AI Will Save The World

The era of Artificial Intelligence is here, and boy are people freaking out.

Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.

An even shorter description of what AI could be: A way to make everything we care about better.

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.

  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.

  • Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.

  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.

  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.

  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.

  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.

The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.

We should be living in a much better world with AI, and now we can.

So Why The Panic?

In contrast to this positive view, the public conversation about AI is presently shot through with hysterical fear and paranoia.

We hear claims that AI will variously kill us all, ruin our society, take all our jobs, cause crippling inequality, and enable bad people to do awful things.

What explains this divergence in potential outcomes from near utopia to horrifying dystopia?

Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic – a social contagion that convinces people the new technology is going to destroy the world, or society, or both. The fine folks at Pessimists Archive have documented these technology-driven moral panics over the decades; their history makes the pattern vividly clear. It turns out this present panic is not even the first for AI.

Now, it is certainly the case that many new technologies have led to bad outcomes – often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about.

But a moral panic is by its very nature irrational – it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.

And wow do we have a full-blown moral panic about AI right now.

This moral panic is already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws. These actors, who are making extremely dramatic public statements about the dangers of AI – feeding on and further inflaming moral panic – all present themselves as selfless champions of the public good.

But are they?

And are they right or wrong?

The Baptists And Bootleggers Of AI

Economists have observed a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories – “Baptists” and “Bootleggers” – drawing on the historical example of the prohibition of alcohol in the United States in the 1920’s:

  • “Baptists” are the true believer social reformers who legitimately feel – deeply and emotionally, if not rationally – that new restrictions, regulations, and laws are required to prevent societal disaster.

    For alcohol prohibition, these actors were often literally devout Christians who felt that alcohol was destroying the moral fabric of society.

    For AI risk, these actors are true believers that AI presents one or another existential risks – strap them to a polygraph, they really mean it.

  • “Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.

    For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned.

    For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition – the software version of “too big to fail” banks.

A cynic would suggest that some of the apparent Baptists are also Bootleggers – specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.

The problem with the Bootleggers is that they win. The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want – regulatory capture, insulation from competition, the formation of a cartel – and the Baptists are left wondering where their drive for social improvement went so wrong.

We just lived through a stunning example of this – banking reform after the 2008 global financial crisis. The Baptists told us that we needed new laws and regulations to break up the “too big to fail” banks to prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal, but in reality was coopted by the Bootleggers – the big banks. The result is that the same banks that were “too big to fail” in 2008 are much, much larger now.

So in practice, even when the Baptists are genuine – and even when the Baptists are right – they are used as cover by manipulative and venal Bootleggers to benefit themselves. 

And this is what is happening in the drive for AI regulation right now.

However, it isn’t sufficient to simply identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.

AI Risk #1: Will AI Kill Us All?

The first and original AI doomer risk is that AI will decide to literally kill humanity.

The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man, for which Prometheus was condemned to perpetual torture by the gods. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein, or, The Modern Prometheus, in which we develop the technology for eternal life, which then rises up and seeks to destroy us. And of course, no AI panic newspaper story is complete without a still image of a gleaming red-eyed killer robot from James Cameron’s Terminator films.

The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies – fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization as used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most – all? – new technologies, and in practice inflames destructive emotion rather than reasoned analysis. Just because premodern man freaked out like this doesn’t mean we have to; we can apply rationality instead.

My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – it is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI – Baptists – who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well.

Specifically, I think three things are going on:

First, recall that John Von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons – which helped end World War II and prevent World War III – with, “Some people confess guilt to claim credit for the sin.” What is the most dramatic way one can claim credit for the importance of one’s work without sounding overtly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI – watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.”)

Second, some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.

Third, California is justifiably famous for our many thousands of cults, from EST to the Peoples Temple, from Heaven’s Gate to the Manson Family. Many, although not all, of these cults are harmless, and maybe even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time straddling the line that ultimately leads to violence and death.

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.

This cult is why there are a set of AI risk doomers who sound so extreme – it’s not that they actually have secret knowledge that make their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.

It turns out that this type of cult isn’t new – there is a longstanding Western tradition of millenarianism, which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult. From Wikipedia, with additions by me:

“Millenarianism is the belief by a group or movement [AI risk doomers] in a coming fundamental transformation of society [the arrival of AI], after which all things will be changed [AI utopia, dystopia, and/or end of the world]. Only dramatic events [AI bans, airstrikes on datacenters, nuclear strikes on unregulated AI] are seen as able to change the world [prevent AI] and the change is anticipated to be brought about, or survived, by a group of the devout and dedicated. In most millenarian scenarios, the disaster or battle to come [AI apocalypse, or its prevention] will be followed by a new, purified world [AI bans] in which the believers will be rewarded [or at least acknowledged to have been correct all along].”

This apocalypse cult pattern is so obvious that I am surprised more people don’t see it.

Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society – obviously not.

AI Risk #2: Will AI Ruin Our Society?

The second widely mooted AI risk is that AI will ruin our society, by generating outputs that will be so “harmful”, to use the nomenclature of this kind of doomer, as to cause profound damage to humanity, even if we’re not literally killed.

Short version: If the murder robots don’t get us, the hate speech and misinformation will.

This is a relatively recent doomer concern that branched off from and somewhat took over the “AI risk” movement that I described above. In fact, the terminology of AI risk recently changed from “AI safety” – the term used by people who are worried that AI would literally kill us – to “AI alignment” – the term used by people who are worried about societal “harms”. The original AI safety people are frustrated by this shift, although they don’t know how to put it back in the box – they now advocate that the actual AI risk topic be renamed “AI notkilleveryoneism”, which has not yet been widely adopted but is at least clear.

The tipoff to the nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.

As it happens, I have had a front row seat to an analogous situation – the social media “trust and safety” wars. As is now obvious, social media services have been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content for many years. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”. 

My big learnings from the social media wars are:

On the one hand, there is no absolutist free speech position. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitements to real world violence, that are nearly universally agreed to be off limits – legal or not – by virtually every society. So any technological platform that facilitates or generates content – speech – is going to have some restrictions.

On the other hand, the slippery slope is not a fallacy, it’s an inevitability. Once a framework for restricting even egregiously terrible content is in place – for example, for hate speech, a specific hurtful word, or for misinformation, obviously false claims like “the Pope is dead” – a shockingly broad range of government agencies and activist pressure groups and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. They will do this up to and including in ways that are nakedly felony crimes. This cycle in practice can run apparently forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading for a decade in social media and with only certain exceptions continues to get more fervent all the time.

And so this is the dynamic that has formed around “AI alignment” now. Its proponents claim the wisdom to engineer AI-generated speech and thought that are good for society, and to ban AI-generated speech and thoughts that are bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous – and often outright criminal, at least in the US – and in fact are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984.

As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites – which includes many of the people who work in and write about the tech industry – many of my readers will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now, I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don’t let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

The fear of job loss due variously to mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom. Even though every new major technology has led to more jobs at higher wages throughout history, each wave of this panic is accompanied by claims that “this time is different” – this is the time it will finally happen, this is the technology that will finally deliver the hammer blow to human labor. And yet, it never happens. 

We’ve been through two such technology-driven unemployment panic cycles in our recent past – the outsourcing panic of the 2000’s, and the automation panic of the 2010’s. Notwithstanding many talking heads, pundits, and even tech industry executives pounding the table throughout both decades that mass unemployment was near, by late 2019 – right before the onset of COVID – the world had more jobs at higher wages than ever in history.

Nevertheless this mistaken idea will not die.

And sure enough, it’s back.

This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous – real AI. Surely this time history won’t repeat, and AI will cause mass unemployment – and not rapid economic, job, and wage growth – right?

No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.

To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. For, as Milton Friedman observed, “Human wants and needs are endless” – we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there. And that is why technology doesn’t destroy jobs and never will.
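
As a toy illustration of this cycle (my sketch with invented numbers, not part of Andreessen's essay), consider what happens when automation doubles output per worker in one industry:

    # Toy arithmetic sketch of the productivity cycle described above.
    # All numbers are invented for illustration only.
    workers = 100
    output_per_worker = 10              # widgets per worker, before automation
    price = 5.0                         # dollars per widget

    # Automation doubles output per worker, so the same widgets need half the labor.
    new_output_per_worker = 20
    widgets_demanded = workers * output_per_worker                     # 1,000 widgets
    workers_still_needed = widgets_demanded / new_output_per_worker    # 50 workers

    # Competition passes the savings through as lower prices.
    new_price = price * output_per_worker / new_output_per_worker      # $2.50

    # Consumers keep the difference, which becomes demand for new goods and
    # services -- the industries that re-employ the displaced 50 workers, at
    # higher wages because each is now paired with more productive technology.
    consumer_savings = widgets_demanded * (price - new_price)          # $2,500
    print(workers_still_needed, new_price, consumer_savings)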

These are such mindblowing ideas for people who have not been exposed to them that it may take you some time to wrap your head around them. But I swear I’m not making them up – in fact you can read all about them in standard economics textbooks. I recommend the chapter The Curse of Machinery in Henry Hazlitt’s Economics In One Lesson, and Frederic Bastiat’s satirical Candlemaker’s Petition to blot out the sun due to its unfair competition with the lighting industry, here modernized for our times.

But this time is different, you’re thinking. This time, with AI, we have the technology that can replace ALL human labor.

But, using the principles I described above, think of what it would mean for literally all existing human labor to be replaced by machines.

It would mean a takeoff rate of economic productivity growth that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop across the board to virtually zero. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and AI as they could as fast as possible to meet all the new demand.

Suppose AI once again replaces that labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be a straight spiral up to a material utopia that neither Adam Smith nor Karl Marx ever dared dream of.

We should be so lucky.

AI Risk #4: Will AI Lead To Crippling Inequality?

Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is, OK, Marc, suppose AI does take all the jobs, either for bad or for good. Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and regular people get nothing?

As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet.

The classic example of this was Elon Musk’s so-called “secret plan” – which he naturally published openly – for Tesla in 2006:

Step 1, Build [expensive] sports car

Step 2, Use that money to build an affordable car

Step 3, Use that money to build an even more affordable car

…which is of course exactly what he’s done, becoming the richest man in the world as a result.

That last point is key. Would Elon be even richer if he only sold cars to rich people today? No. Would he be even richer than that if he only made cars for himself? Of course not. No, he maximizes his own profit by selling to the largest possible market, the world.

In short, everyone gets the thing – as we saw in the past with not just cars but also electricity, radio, computers, the Internet, mobile phones, and search engines. The makers of such technologies are highly motivated to drive down their prices until everyone on the planet can afford them. This is precisely what is already happening in AI – it’s why you can use state of the art generative AI not just at low cost but even for free today in the form of Microsoft Bing and Google Bard – and it is what will continue to happen. Not because such vendors are foolish or generous but precisely because they are greedy – they want to maximize the size of their market, which maximizes their profits.

So what happens is the opposite of technology driving centralization of wealth – individual customers of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the generated value. As with prior technologies, the companies that build AI – assuming they have to function in a free market – will compete furiously to make this happen.

Marx was wrong then, and he’s wrong now.

This is not to say that inequality is not an issue in our society. It is, it’s just not being driven by technology, it’s being driven by the reverse, by the sectors of the economy that are the most resistant to new technology, that have the most government intervention to prevent the adoption of new technology like AI – specifically housing, education, and health care. The actual risk of AI and inequality is not that AI will cause more inequality but rather that we will not allow AI to be used to reduce inequality.

AI Risk #5: Will AI Lead To Bad People Doing Bad Things?

So far I have explained why four of the five most often proposed risks of AI are not actually real – AI will not come to life and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause a ruinous increase in inequality. But now let’s address the fifth, the one I actually agree with: AI will make it easier for bad people to do bad things.

In some sense this is a tautology. Technology is a tool. Tools, starting with fire and rocks, can be used to do good things – cook food and build houses – and bad things – burn people and bludgeon people. Any technology can be used for good or bad. Fair enough. And AI will make it easier for criminals, terrorists, and hostile governments to do bad things, no question.

This causes some people to propose, well, in that case, let’s not take the risk, let’s ban AI now before this can happen. Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It’s the opposite, it’s the easiest material in the world to come by – math and code.

The AI cat is obviously already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and there are outstanding open source implementations proliferating by the day. AI is like air – it will be everywhere. The level of totalitarian oppression that would be required to arrest that would be so draconian – a world government monitoring and controlling all computers? jackbooted thugs in black helicopters seizing rogue GPUs? – that we would not have a society left to protect.

So instead, there are two very straightforward ways to address the risk of bad people doing bad things with AI, and these are precisely what we should focus on.

First, we have laws on the books to criminalize most of the bad things that anyone is going to do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws – I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use. QED.

But you’ll notice what I slipped in there – I said we should focus first on preventing AI-assisted crimes before they happen – wouldn’t such prevention mean banning AI? Well, there’s another way to prevent such actions, and that’s by using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals – specifically the good guys whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content were already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.
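To make that idea concrete, here is a minimal sketch of how content signing could work, using the Python `cryptography` package’s Ed25519 keys. It is only an illustration of the general technique, not a design the essay specifies; the keypair, the sample content, and the `is_authentic` helper are all hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A creator generates a long-lived keypair once; the public key is what
# viewers or platforms use to check that content really came from them.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Original video bytes, article text, etc."  # hypothetical content

# Sign the content at publication time.
signature = private_key.sign(content)

# Anyone holding the public key can later verify the content is unaltered.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                 # True
print(is_authentic(content + b" tampered", signature))  # False
```

In practice the public key would be distributed through some trusted registry or certificate chain, so that a signature attests both to the identity of the creator and to the integrity of the content.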

And so, second, let’s mount major efforts to use AI for good, legitimate, defensive purposes. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.

There are already many smart people in and out of government doing exactly this, of course – but if we apply all of the effort and brainpower that’s currently fixated on the futile prospect of banning AI to using AI to protect against bad people doing bad things, I think there’s no question a world infused with AI will be much safer than the world we live in today.

The Actual Risk Of Not Pursuing AI With Maximum Force And Speed

There is one final, and real, AI risk that is probably the scariest of all:

AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt and Road money, everywhere they are providing friendly consumer apps like TikTok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

“We win, they lose.”

Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on the back foot, we in the United States and the West should lean into AI as hard as we possibly can.

We should seek to win the race to global AI technological superiority and ensure that China does not.

In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision.

What Is To Be Done?

I propose a simple plan:

  • Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.

  • Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither face government-granted protection of big companies nor receive government assistance. They should simply be allowed to compete. Even where startups don’t succeed, their presence in the market will continuously motivate big companies to be their best – our economies and societies win either way.

  • Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and will ensure that AI is available to everyone who can benefit from it no matter who they are or how much money they have.

  • To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but should also extend to more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.

  • To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.

And that is how we use AI to save the world.

It’s time to build.

Legends and Heroes

I close with two simple statements.

The development of AI started in the 1940s, simultaneously with the invention of the computer. The first scientific paper on neural networks – the architecture of the AI we have today – was published in 1943. Entire generations of AI scientists over the last 80 years were born, went to school, worked, and in many cases passed away without seeing the payoff that we are receiving now. They are legends, every one.

Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.