Showing posts with label future. Show all posts

Friday, July 19, 2024

MindBlog does a survey of futurists and recommends a few.

In the wild and woolly world of thinking about futures for the earth, its living forms, and humanity, there exists a tier of well-known public intellectuals. I prompted 5 different GPT engines (Google Gemini, ChatGPT 4o, Anthropic Claude, Perplexity, Microsoft Copilot) with: "Please act as an author who has chosen a list of the 10 most influential futurists - public intellectuals who speculate about the future of humanity. Write a brief essay describing each of them and their basic messages."

The names selected by more than one of the GPT engines were: Ray Kurzweil (5), Michio Kaku (5), Yuval Harari (4), Alvin Toffler (4), Nick Bostrom (3), Aubrey de Grey (3), William Gibson (2), Faith Popcorn (2), and Paul Roberts (2). The names chosen by only one of the engines were Peter Schwartz, Juan Enriquez, Elon Musk, Peter Diamandis, Kevin Kelly, Jane McGonigal, James Lovelock, Ari Wallach, Buckminster Fuller, Stewart Brand, Jacque Fresco, Donna Haraway, Hans Rosling, Jaron Lanier, Christopher Alberg, Dirk Helbing, and John Naisbitt.

My own favorite futurist has been Venkatesh Rao, a brilliant polymath whose ideas I have passed on in numerous MindBlog posts (see Ribbonfarm for his narratives). He is one of a cohort of younger thinkers, around 50 years old, generating their own newsletters and websites that I have stumbled onto recently. These include the Canadian Patrick Tanguay (the Sentiers newsletter), Samuel Arbesman's site and its 'lists of lists', Parag Khanna, and Jason Crawford (Roots of Progress). I would encourage MindBlog readers to check out a few of these links.

Wednesday, July 17, 2024

An essay on Geopolitics and Globalization

I want to pass on one complete section of a larger essay addressing tribalism versus global consciousness, along with the titles of its other sections.

Untimely Meditations

Philosophy goes against the grain of political immediacy to insist on inconvenient truths.


Nathan Gardels, Noema Magazine Editor-in-Chief

The Condition of Planetarity

The Politics of Planetary Realism

Conceptual demolition of the outmoded paradigm of nation-state realism, however, does not erase its still firmly rooted expression in present practice. Building the centripetal momentum toward binding planetary association against the weighty centrifugal pull of tribal identity is an endeavor as fraught as it is necessary.

While technology and advancing science may foster a universal understanding of the planetary condition, politics and culture have a different logic rooted in emotion and ways of life cultivated among one’s own kind. Far from moving ahead in lock step, when they meet, they clash.

Indeed, the great paradox today is that the planetary imperative of mitigating climate change has become the province of renewed nationalism. Industrial policies designed to make the green energy transition are competing to protect and promote national self-interest instead of joining together at the level of all humanity. What we see instead is the battle of subsidies between the U.S. Inflation Reduction Act and the European Green New Deal, with both raising stiff tariff barriers to blunt China’s concerted conquest of core green technologies from storage batteries to electric vehicles, solar cells and their supply chains.

In short, rather than uniting as a threatened species to meet a challenge that knows no boundaries, competition has sidelined collaboration within the West while global warming has been weaponized in the new Cold War with China.

The new element of a greenlash, registered in the recent European elections, portends social resistance that is more about resentment of self-righteous Tesla-driving elites and the unequally borne costs of the energy transition than climate denial. The Diet-Coke imaginary of environmentalists — who sold climate policies as achievable without undue burdens on economies built around fossil fuels for more than a century — has been put to rest. As the heavy lift of the transition bites ever more deeply into the daily bread, we are learning the hard lesson that the future has a scarce constituency in consumer democracies as well as growth-oriented autocracies.

In this sense, “planetary realism” takes on a double meaning. It entails both a recognition of the interdependence of the planetary condition as well as a realistic grasp of what it will take to navigate through what remains a world of nations.

Immigration And Belonging

Becoming Universal

Friday, June 28, 2024

How AI will transform the physical world.

I pass on the text of wild-eyed speculations that futurist Ray Kurzweil recently contributed to The Economist. Since 1990 he has been writing on how soon "The Singularity" - machine intelligence exceeding human intelligence - will arrive and transform our physical world in energy, manufacturing and medicine. I have too many reservations about the realism of his fantasizing to even begin to list them, but the article is a fun read:

By the time children born today are in kindergarten, artificial intelligence (AI) will probably have surpassed humans at all cognitive tasks, from science to creativity. When I first predicted in 1999 that we would have such artificial general intelligence (AGI) by 2029, most experts thought I’d switched to writing fiction. But since the spectacular breakthroughs of the past few years, many experts think we will have AGI even sooner—so I’ve technically gone from being an optimist to a pessimist, without changing my prediction at all.

After working in the field for 61 years—longer than anyone else alive—I am gratified to see AI at the heart of global conversation. Yet most commentary misses how large language models like ChatGPT and Gemini fit into an even larger story. AI is about to make the leap from revolutionising just the digital world to transforming the physical world as well. This will bring countless benefits, but three areas have especially profound implications: energy, manufacturing and medicine.

Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?

The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.

By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free.
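As an aside, the 0.01% sunlight figure quoted above is easy to sanity-check. The constants below are textbook values I'm supplying, not numbers from the essay:

```python
import math

SOLAR_CONSTANT_W_M2 = 1361   # top-of-atmosphere solar irradiance (textbook value)
EARTH_RADIUS_M = 6.371e6     # mean Earth radius
WORLD_POWER_TW = 19          # rough average rate of human primary energy use

# Sunlight intercepted by Earth's cross-sectional disk:
intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M**2
fraction_tw = intercepted_w * 1e-4 / 1e12   # 0.01% of it, in terawatts

print(f"Earth intercepts ~{intercepted_w / 1e12:,.0f} TW of sunlight")
print(f"0.01% of that is ~{fraction_tw:.1f} TW, vs ~{WORLD_POWER_TW} TW consumed")
```

The two numbers land within about 10% of each other, so the quoted claim is at least dimensionally honest.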

Energy abundance enables another revolution: in manufacturing. The costs of almost all goods—from food and clothing to electronics and cars—come largely from a few common factors such as energy, labour (including cognitive labour like R&D and design) and raw materials. AI is on course to vastly lower all these costs.

After cheap, abundant solar energy, the next component is human labour, which is often backbreaking and dangerous. AI is making big strides in robotics that can greatly reduce labour costs. Robotics will also reduce raw-material extraction costs, and AI is finding ways to replace expensive rare-earth elements with common ones like zirconium, silicon and carbon-based graphene. Together, this means that most kinds of goods will become amazingly cheap and abundant.

These advanced manufacturing capabilities will allow the price-performance of computing to maintain the exponential trajectory of the past century—a 75-quadrillion-fold improvement since 1939. This is due to a feedback loop: today’s cutting-edge AI chips are used to optimise designs for next-generation chips. In terms of calculations per second per constant dollar, the best hardware available last November could do 48bn. Nvidia’s new B200 GPUs exceed 500bn.

As we build the titanic computing power needed to simulate biology, we’ll unlock the third physical revolution from AI: medicine. Despite 200 years of dramatic progress, our understanding of the human body is still built on messy approximations that are usually mostly right for most patients, but probably aren’t totally right for you. Tens of thousands of Americans a year die from reactions to drugs that studies said should help them.

Yet AI is starting to turn medicine into an exact science. Instead of painstaking trial-and-error in an experimental lab, molecular biosimulation—precise computer modelling that aids the study of the human body and how drugs work—can quickly assess billions of options to find the most promising medicines. Last summer the first drug designed end-to-end by AI entered phase-2 trials for treating idiopathic pulmonary fibrosis, a lung disease. Dozens of other AI-designed drugs are now entering trials.

Both the drug-discovery and trial pipelines will be supercharged as simulations incorporate the immensely richer data that AI makes possible. In all of history until 2022, science had determined the shapes of around 190,000 proteins. That year DeepMind’s AlphaFold 2 discovered over 200m, which have been released free of charge to researchers to help develop new treatments.

Much more laboratory research is needed to populate larger simulations accurately, but the roadmap is clear. Next, AI will simulate protein complexes, then organelles, cells, tissues, organs and—eventually—the whole body.

This will ultimately replace today’s clinical trials, which are expensive, risky, slow and statistically underpowered. Even in a phase-3 trial, there’s probably not one single subject who matches you on every relevant factor of genetics, lifestyle, comorbidities, drug interactions and disease variation.

Digital trials will let us tailor medicines to each individual patient. The potential is breathtaking: to cure not just diseases like cancer and Alzheimer’s, but the harmful effects of ageing itself.

Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.

This is AI’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings. ■

Wednesday, June 26, 2024

Off the rails - inequity and unfairness built into capitalism

I have largely withdrawn from posting items relevant to the details of our current political and social malaise, but I want to pass on a few clips from a piece by Bret Stephens, which passes on points made by Ruchir Sharma, the chairman of Rockefeller International and a Financial Times columnist, in his new book “What Went Wrong With Capitalism.” Sharma makes a convincing case about what has gotten us where we are: easy money, or ultralow interest rates. When the price of borrowing money is zero, everything goes bonkers.
In 2010, as the era of ultralow and even negative interest rates was getting started, the median sale price for a house in the United States hovered around $220,000. By the start of this year, it was more than $420,000.
Inflation is seen in global financial markets:
...In 1980 they were worth a total of $12 trillion — equal to the size of the global economy at the time. After the pandemic...those markets were worth $390 trillion, or around four times the world’s total gross domestic product.
In theory, easy money should have broad benefits for regular people, from employees with 401(k)s to consumers taking out cheap mortgages. In practice, it has destroyed much of what used to make capitalism an engine of middle-class prosperity in favor of the old and very rich.
First, there was inflation in real and financial assets, followed by inflation in consumer prices, followed by higher financing costs as interest rates have risen to fight inflation...for Americans who rely heavily on credit, it’s been devastating...
...the system is broken and rigged, particularly against the poor and the young. “A generation ago, it took the typical young family three years to save up to the down payment on a home,” Sharma observes in the book. “By 2019, thanks to no return on savings, it was taking 19 years.”
The social consequence of this is rage; the political consequence is populism.
“For all their policy differences, both leading U.S. candidates are committed and fearless statists, not friends of competitive capitalism.”
What happens when both major parties are wedded to two versions of the same failing ideas? And what happens when leading figures of both the progressive left and the populist right seek to compound the problem with even easier credit and more runaway spending?
The answer: We are wandering in fog. And the precipice is closer than we think.
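Sharma's three-years-versus-19-years down-payment figure can be made concrete with a toy savings model. The dollar amounts below are my illustrative assumptions, not his data; the point is only that zero return on savings and a higher price compound each other:

```python
def years_to_save(price, annual_saving, rate, down=0.20):
    """Years of saving until the balance covers a `down` fraction of `price`."""
    target, balance, years = down * price, 0.0, 0
    while balance < target:
        balance = balance * (1 + rate) + annual_saving
        years += 1
    return years

# Hypothetical family saving $6,000/year toward a 20% down payment:
then = years_to_save(220_000, 6_000, rate=0.05)  # 2010-era price, 5% return
now = years_to_save(420_000, 6_000, rate=0.00)   # today's price, zero return
print(f"Then: {then} years; now: {now} years")
```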

Wednesday, June 19, 2024

Managing the human herd

This post is a dyspeptic random walk through thoughts triggered by the front-page photograph of the Wall Street Journal of June 17, 2024, showing masses of pilgrims embarking on a symbolic stoning of the devil in Saudi Arabia in soaring summer heat. Such enormous mobs of people are those most easily roused to strong emotions by charismatic speakers.

How can the emotions and behaviors of such enormous clans of humans be regulated in a sane and humane way? Can this be accomplished outside of authoritarian or oligarchical governance? Might such governance establish its control of the will and moods of millions through the unnoticed infiltration of AI into all aspects of their daily life (cf. Apple's recent AI announcements)? Will the world come to be ruled by a "Book of Elon"?

Or might we be moving into a world of decentralized everything? A bottom-up emergence of consensus governance from the mosh pit of Web3, cryptocurrencies and stablecoins? The noble sentiments of the Ethereum Foundation notwithstanding, the examples we have to date of 'rules of the commons' are the chaos of Discord, Reddit, and other social media, where the sentiments of idiots and experts jumble together in an impenetrable cacophony.

Who or what is going to emerge to control this mess? How long will the "permaweird" persist?  


Monday, June 17, 2024

Empty innovation: What are we even doing?

I came across an interesting commentary by "Tante" on innovation, invention, and progress (or the lack thereof) in the constant churning, and rise and fall, of new ideas and products in the absence of questions like "Why are we doing this?" and "Who is profiting?". In spite of the speaker's arrogance and annoying style, I think it is worth a viewing.

Friday, June 14, 2024

The future of life.

I want to pass on this Science magazine review of Jamie Metzl's new book "Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World". Metzl is founder of the One Shared World organization. Check out its website here.
On the night of 4 July 1776, the Irish immigrant and official printer to the Continental Congress John Dunlap entered his Philadelphia print-shop and began to typeset the first printed version of a document that was to become the enduring North Star of the “American experiment.” It comprised an ideological handbook for its utopian aspirations and a codification of purported essential self-evident ground truths that included the equality of all men and the rights to life, liberty, and the pursuit of happiness. By the morning, Dunlap had produced an estimated 200 copies of the American Declaration of Independence, which Abraham Lincoln would later refer to as a “rebuke and a stumblingblock… to tyranny and oppression.”
In his erudite, optimistic, and timely book Superconvergence, the futurist Jamie Metzl laments the lack of any such authoritative reference to inform our exploration of an equally expansive, intriguing, and uncharted territory: humankind’s future. Replete with unprecedented opportunities and existential risks hitherto unimaginable in life’s history, the new world we are entering transcends geographical boundaries, and—as a result of humankind’s global interdependencies—it must, by necessity, exist in a no-man’s-land beyond the mandates of ideologies and nation-states. Its topography is defined not by geological events and evolution by natural selection so much as by the intersection of several exponential human-made technologies. Most notably, these include the generation of machine learning intelligence that can interrogate big data to define generative “rules” of biology and the post- Darwinian engineering of living systems through the systematic rewriting of their genetic code.
Acknowledging the intrinsic mutability of natural life and its ever-changing biochemistry and morphology, Metzl is unable to align himself with UNESCO’s 1997 Universal Declaration on the Human Genome and Human Rights. To argue that the current version of the human genome is sacred is to negate its prior iterations, including the multiple species of human that preceded us but disappeared along the way. The sequences of all Earth’s species are in a simultaneous state of being and becoming, Metzl argues. Life is intrinsically fluid.
Although we are still learning to write complex genomes rapidly, accurately, without sequence limitation, and at low cost, and our ability to author novel genomes remains stymied by our inability to unpick the generative laws of biology, it is just a matter of time before we transform biology into a predictable engineering material, at which point we will be able to recast life into desired forms. But while human-engineered living materials and biologically inspired devices offer potential solutions to the world’s most challenging problems, our rudimentary understanding of complex ecosystems and the darker sides of human nature cast long shadows, signaling the need for caution.
Metzl provides some wonderful examples of how artificial species and bioengineering, often perceived as adversaries of natural life, could help address several of the most important issues of the moment. These challenges include climate change, desertification, deforestation, pollution (including the 79,000-metric-ton patch of garbage the size of Alaska in the Pacific Ocean), the collapse of oceanic ecosystems, habitat loss, global population increase, and the diminution of species biodiversity. By rewriting the genomes of crops and increasing the efficiency of agriculture, we can reduce the need to convert additional wild habitats into farmland, he writes. Additionally, the use of bioengineering to make sustainable biofuels, biocomputing, bio foodstuffs, biodegradable plastics, and DNA information–storing materials will help reduce global warming.
Meanwhile, artificial intelligence (AI) can free up human time. By 2022, DeepMind’s AlphaFold program had predicted the structures of 214 million proteins—a feat that would have taken as long as 642 million years to achieve using conventional methods. As Metzl comments, this places “millions of years back into the pot of human innovation time.” The ability to hack human biology using AI will also have a tremendous impact on the human health span and life span, not least through AI-designed drugs, he predicts.
Metzl is right when he concludes that we have reached a “critical moment in human history” and that “reengineered biology will play a central role in the future of our species.” We will need to define a new North Star—a manifesto for life—to assist with its navigation. Metzl argues for the establishment of a new international body with depoliticized autonomy to focus on establishing common responses to shared global existential challenges. He suggests that this process could be kick-started by convening a summit aimed at establishing aligned governance guidelines for the revolutionary new technologies we are creating.

Monday, May 13, 2024

How the US is destroying young people's future.

When I wake up in the morning, I frequently remind myself to be grateful for the luck of having been born in 1942, and being able to ride the crest of a number of fortunate external circumstances that made my generation vastly better off than those who followed. I was in high school in the late 50s when Sputnik happened, fueling a huge increase in federal research funding that powered my laboratory research career studying how our vision works. Both my parents and I worked for state governments or universities that offered generous retirement plans and pensions, and the ability to set aside tax-deferred money to invest for later years.

This has led to the situation succinctly described in the following TED video by Scott Galloway, who teaches at NYU; a transcript of the talk is here. (It was sent to me by my 49-year-old son, a senior e-commerce digital solutions architect, whose expectations about the future are vastly more modest than mine were at his age.) One of the most striking graphics in the video shows that the share of household wealth held by those 70 and older has increased by 11% since 1989, while that held by those under 40 has decreased by 5%.


Monday, May 06, 2024

Are we the cows of the future?

One of the questions posed by Yuval Harari in his writing on our possible futures is "What are we to do with all these humans who are, except for a small technocratic elite, no longer required as the means of production?" Esther Leslie, a professor of political aesthetics at Birkbeck College, University of London, has written an essay on this issue, pointing out that our potential futures in the pastures of digital dictatorship — crowded conditions, mass surveillance, virtual reality — are already here. You should read her essay; I pass on just a few striking clips of text:

...Cows’ bodies have historically served as test subjects — laboratories of future bio-intervention and all sorts of reproductive technologies. Today cows crowd together in megafarms, overseen by digital systems, including facial- and hide-recognition systems. These new factories are air-conditioned sheds where digital machinery monitors and logs the herd’s every move, emission and production. Every mouthful of milk can be traced to its source.
And it goes beyond monitoring. In 2019 on the RusMoloko research farm near Moscow, virtual reality headsets were strapped onto cattle. The cows were led, through the digital animation that played before their eyes, to imagine they were wandering in bright summer fields, not bleak wintry ones. The innovation, which was apparently successful, is designed to ward off stress: The calmer the cow, the higher the milk yield.
A cow sporting VR goggles is comedic as much as it is tragic. There’s horror, too, in that it may foretell our own alienated futures. After all, how different is our experience? We submit to emotion trackers. We log into biofeedback machines. We sign up for tracking and tracing. We let advertisers’ eyes watch us constantly and mappers store our coordinates.
Could we, like cows, be played by the machinery, our emotions swayed under ever-sunny skies, without us even knowing that we are inside the matrix? Will the rejected, unemployed and redundant be deluded into thinking that the world is beautiful, a land of milk and honey, as they interact minimally in stripped-back care homes? We may soon graze in the new pastures of digital dictatorship, frolicking while bound.
Leslie then describes the ideas of German philosopher and social critic Theodor Adorno:
Against the insistence that nature should not be ravished by technology, he argues that perhaps technology could enable nature to get what “it wants” on this sad earth. And we are included in that “it.”...Nature, in truth, is not just something external on which we work, but also within us. We too are nature.
For someone associated with the abstruseness of avant-garde music and critical theory, Adorno was surprisingly sentimental when it came to animals — for which he felt a powerful affinity. It is with them that he finds something worthy of the name Utopia. He imagines a properly human existence of doing nothing, like a beast, resting, cloud gazing, mindlessly and placidly chewing cud.
To dream, as so many Utopians do, of boundless production of goods, of busy activity in the ideal society reflects, Adorno claimed, an ingrained mentality of production as an end in itself. To detach from our historical form adapted solely to production, to work against work itself, to do nothing in a true society in which we embrace nature and ourselves as natural might deliver us to freedom.
Rejecting the notion of nature as something that would protect us, give us solace, reveals us to be inextricably within and of nature. From there, we might begin to save ourselves — along with everything else.
(The above is a repost of MindBlog's 1/7/21 post)

Wednesday, April 10, 2024

The world of decentralized everything.

Following up on my last post on the Summer of Protocols sessions, I want to pass on (again, to my future self, and possibly a few techie MindBlog readers) a few links to the world of decentralized, grassroots everything - commerce, communications, finance, etc. - trying to bypass the traditional powers and gatekeepers in these areas by constructing distributed systems, usually based on blockchains and cryptocurrencies. I am trying to learn more about this, taking things in small steps to avoid overload headaches... (One keeps stumbling on areas of worldwide engagement of thousands of very intelligent minds.)

Here is a worthwhile read of the general idea from the Ethereum Foundation.

I've described getting into one decentralized context by setting up a Helium Mobile network hotspot, as well as my own private Helium Mobile cellular account. To follow this up, I pass on a link from a Helium email pointing to its participation in Consensus 2024, May 29-31 in Austin TX (where I now live), sponsored by CoinDesk. A look at the agenda for that meeting gives you an impression of the multiple engagements among government regulatory agencies, business, and the crypto world that are occurring.

Wednesday, March 20, 2024

Fundamentally changing the nature of war.

I generally try to keep a distance from 'the real world' and apocalyptic visions of what AI might do, but I decided to pass on some clips from this technology essay in The Wall Street Journal that makes some very plausible predictions about the future of armed conflicts between political entities:

The future of warfare won’t be decided by weapons systems but by systems of weapons, and those systems will cost less. Many of them already exist, whether they’re the Shahed drones attacking shipping in the Gulf of Aden or the Switchblade drones destroying Russian tanks in the Donbas or smart seaborne mines around Taiwan. What doesn’t yet exist are the AI-directed systems that will allow a nation to take unmanned warfare to scale. But they’re coming.

At its core, AI is a technology based on pattern recognition. In military theory, the interplay between pattern recognition and decision-making is known as the OODA loop— observe, orient, decide, act. The OODA loop theory, developed in the 1950s by Air Force fighter pilot John Boyd, contends that the side in a conflict that can move through its OODA loop fastest will possess a decisive battlefield advantage.

For example, of the more than 150 drone attacks on U.S. forces since the Oct. 7 attacks, in all but one case the OODA loop used by our forces was sufficient to subvert the attack. Our warships and bases were able to observe the incoming drones, orient against the threat, decide to launch countermeasures and then act. Deployed in AI-directed swarms, however, the same drones could overwhelm any human-directed OODA loop. It’s impossible to launch thousands of autonomous drones piloted by individuals, but the computational capacity of AI makes such swarms a possibility.
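The saturation argument in the passage above (human-paced OODA loops cope with a handful of drones but not a swarm) reduces to simple throughput arithmetic. Every number in this sketch is a hypothetical illustration, not doctrine:

```python
def defense_holds(drones, operators, human_loop_s, attack_window_s):
    """Can human operators, each completing one OODA loop per engagement,
    engage every incoming drone within the attack window?"""
    engagements_possible = operators * (attack_window_s / human_loop_s)
    return engagements_possible >= drones

# 5 operators, a 60-second human decision cycle, a 5-minute raid:
print(defense_holds(drones=3, operators=5, human_loop_s=60, attack_window_s=300))     # True
print(defense_holds(drones=2000, operators=5, human_loop_s=60, attack_window_s=300))  # False
```

Shrinking `human_loop_s` by handing decisions to machines is the whole game, which is the essay's point.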

This will transform warfare. The race won’t be for the best platforms but for the best AI directing those platforms. It’s a war of OODA loops, swarm versus swarm. The winning side will be the one that’s developed the AI-based decision-making that can outpace their adversary. Warfare is headed toward a brain-on-brain conflict.

The Department of Defense is already researching a “brain-computer interface,” which is a direct communications pathway between the brain and an AI. A recent study by the RAND Corporation examining how such an interface could “support human- machine decision-making” raised the myriad ethical concerns that exist when humans become the weakest link in the wartime decision-making chain. To avoid a nightmare future with battlefields populated by fully autonomous killer robots, the U.S. has insisted that a human decision maker must always remain in the loop before any AI-based system might conduct a lethal strike.

But will our adversaries show similar restraint? Or would they be willing to remove the human to gain an edge on the battlefield? The first battles in this new age of warfare are only now being fought. It’s easy to imagine a future, however, where navies will cease to operate as fleets and will become schools of unmanned surface and submersible vessels, where air forces will stand down their squadrons and stand up their swarms, and where a conquering army will appear less like Alexander’s soldiers and more like a robotic infestation.

Much like the nuclear arms race of the last century, the AI arms race will define this current one. Whoever wins will possess a profound military advantage. Make no mistake, if placed in authoritarian hands, AI dominance will become a tool of conquest, just as Alexander expanded his empire with the new weapons and tactics of his age. The ancient historian Plutarch reminds us how that campaign ended: “When Alexander saw the breadth of his domain, he wept, for there were no more worlds to conquer.”

Elliot Ackerman and James Stavridis are the authors of “2054,” a novel that speculates about the role of AI in future conflicts, just published by Penguin Press. Ackerman, a Marine veteran, is the author of numerous books and a senior fellow at Yale’s Jackson School of Global Affairs. Admiral Stavridis, U.S. Navy (ret.), was the 16th Supreme Allied Commander of NATO and is a partner at the Carlyle Group.


Monday, January 08, 2024

The Importance of Not Being Earnest

I pass on some of the paragraphs from Rao's latest piece to archive them for myself here on MindBlog, and to make them available to other readers:

For my purposes, I will define earnestness as being helplessly locked into a single way of looking at what you’re doing, unaware of other ways.

I suspect there are only a few known and culturally familiar modes of being non-earnest…I think they are humor, irony, and surrealism. I’d guess humor is at least as old as civilization and possibly as old as life. Irony proper seems like an outgrowth of early modern conditions. Surrealism is the newest and youngest mode, barely a century old. I think this potted history is fun, but I won’t insist upon it. Maybe there are more modes, and maybe they appeared in a different sequence, or were all always-already present.

Here’s the core of my argument: the more complex the circumstances, the more dangerous it is to inhabit them from a single perspective; i.e., earnestly. The only really good reason to do so is when dealing with small children or deeply traumatized adults who both need some earnestness in their environment to feel safe.

The importance of non-earnestness is evident even in the “simple” task of chopping vegetables. If you’re doing that for more than 15 minutes, you’ll likely get bored, and start to get sloppy and careless. Creative multi-modal engagement with chopping vegetables — seeing shapes perhaps, or noting colors and textures with an artist’s eye — keeps you mindfully absorbed for longer, more robustly.

In your brain there are two basic modes — mind wandering, sustained by the default-mode network, and focus, sustained by the task-positive network — and my assertion is that they should work together like a clock escapement, unfolding as little micro-fugues of fancy that depart from and return to a base literal mode, and trace out a kind of strange-attractor orbit around the nominal behavior. Something like this is visible at even more basic levels: A healthy heart exhibits high HRV (heart-rate variability). Fitness trackers use HRV as the primary indicator of cardiovascular health. Low variability is a mark of poor health or disease.
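Rao's HRV aside can be made concrete. RMSSD (root mean square of successive differences) is one standard time-domain HRV metric used by fitness trackers; the toy RR-interval series below are made up for illustration, not taken from his essay:

```python
from math import sqrt

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats
    (RR intervals, in milliseconds) -- a standard time-domain HRV
    metric. Higher values indicate more beat-to-beat variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical data: a variable ("healthy") heart vs. a metronomically
# regular one.
variable = [800, 850, 790, 860, 810, 870]
regular = [800, 802, 799, 801, 800, 801]

print(rmssd(variable) > rmssd(regular))  # True: the variable series scores higher
```

The point, in Rao's terms: the "healthy" signal is the one that departs from and returns to its baseline, not the one that never leaves it.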

Now apply that same principle to complex, large-scale systems and problems. Can you afford to be on-the-nose earnest in thinking about them? Are humor, irony, and surrealism optional extras?

The more complex the circumstances, the more dangerous it is to act in ways that are entailed by only a single perspective. Such action is fragile and degenerate. Robust action contains multitudes. It contains obliquities that harbor strategic depth. It contains tempo variations that encode unsuspected depths of adjacent informational richness.

Action must be richer than thought, because phenomenology is always richer than any single theorization. Earnestness — action confined to the imagination of one theory of itself — is behavioral impoverishment. Non-earnestness is proof of richness. Proof of life.

There is more than one way of looking at complex systems, and action within a complex system must make sense in more than one way. There must be more than one categorical scheme through which an unfactored reality can be viewed and justified.

I think we’re currently caught between the retreat of irony and the advent of surrealism.

Unless you’ve been living under a rock, you’ve probably noticed that the last decade has been marked by a broad and intense backlash against irony, the dominant mode of non-earnestness between 1989-2010 or so (I think humor dominated the 70s and 80s). Now, after a transient decade of various sorts of unstable forays into deadening collective earnestness, it feels like we’re shifting en masse to a dominantly surreal mode.

I’ve decided to approach 2024 with a surreal orientation. I don’t quite know what the hell that means yet, but I plan to fuck around and find out.

Humor would be nice to have in what’s already shaping up to be a joyless year, and irony will provide, as it always does, some solace in the darkest, most joyless depths of it. But the workhorse modality is going to be surrealism. Beat-by-beat, breath-by-breath, the creativity of our responses to the year is going to be shaped by our ability to repeatedly escape into the adjacent impossible, and from that vantage point spot the germs of new possibilities. You cannot jailbreak the future from the tyranny of the past without stepping outside of both.

It is hard to escape the thought that we are going to be unsurprisingly unlucky as a planet in 2024, with few and uncertain bright prospects to alleviate the general gloom. We are going to end up with a cognitively compromised geriatric as US President by December. We are going to let two bloody wars grind on. We are going to see weaponized AI compound myriad miseries.

If there is serendipity —surprising luck — to be found in 2024, it will be found and nurtured at the micro level. By people who understand what it means to chop vegetables non-earnestly, and escape the tyranny of the real with every breath and stroke. By people who are not too scared of life to stubbornly resist the temptations of humor, irony, and surrealism in service of the idiot gods of authenticity and earnestness.

Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this posting to save for myself some clips of text from Venkatesh Rao’s most recent piece, to continue mulling over where I place it on the trivial versus sublime spectrum (some of his jargon you will only understand if you have followed the previous installments on Rao I've put in MindBlog; note the link at the end to The Summer of Protocols). Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.
But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.
So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:
Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.
In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.
I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.
Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.
Call this property hardness.
Human civilization depends in part on our ability to make the future more certain in specific ways.
Fixed, hard points across time that let us make the world more predictable.
We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.
The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.
It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
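Rao's ATM example can be sketched in a few lines. This is a hypothetical illustration, not anything from his essay: a distracted user walks away the moment they have their cash, so the only way to guarantee the card comes back is to reorder the steps so the card is returned first:

```python
def atm_session(return_card_first: bool) -> bool:
    """Trace the two ATM step orderings and report whether the card
    was forgotten. The 'hard' protocol makes forgetting the card
    impossible by construction: cash only comes out after the card
    is back in the user's hand."""
    steps = (["return_card", "dispense_cash"]
             if return_card_first
             else ["dispense_cash", "return_card"])
    taken = []
    for step in steps:
        taken.append(step)
        if step == "dispense_cash":
            break  # the distracted user leaves as soon as they have cash
    return "return_card" not in taken  # True means the card was forgotten

print(atm_session(return_card_first=False))  # True: soft protocol, card left behind
print(atm_session(return_card_first=True))   # False: hardness engineered in
```

The safety property no longer depends on the user remembering anything; it is enforced by the ordering itself, which is exactly what Rao means by putting hardness in the right place.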
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.
In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.
The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.
The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.

Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao on how modern AI puts us into an age of exploration of computational reality. I got halfway through my usual gig of extracting clips from the article to archive the main points in a post for later lookup by myself or others. Halfway through this chore, I realized the material was so deep and dense that my clips contained more than half of the original text. So I have stopped editing it down and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which is a vastly better job than I could have done without spending many more hours on the material: 

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.


Thursday, December 07, 2023

Google's Bard and ChatGPT-4 on using AI to explore beyond language and words...

I show first the prompt I addressed yesterday afternoon to both Bard and ChatGPT-4.  The more nuanced response I received from Bard makes me wonder if Google has already started to use the new Gemini AI algorithms  announced yesterday to power Bard.  Here is the prompt, followed by the Bard and ChatGPT-4 responses:

Exploring beyond language and words - starting a conversation: Let us suppose that increasingly sophisticated LLMs, generative pretrained transformers, will become superior to humans in performing any linguistic chores requiring use and knowledge of all the world's written literature that was used as the training opus. This might free humans to pay more attention to understanding non-linguistic aspects of their experiences (such as non verbal music, art, prosody, body language, social mirroring, etc). Could you suggest ways in which AI could assist in defining and elucidating the non-linguistic aspects of our lives so that we might devote more of our attention to them rather than to linguistic chores ?

Google's Bard's response:
How AI could help us explore beyond language and words:

Your suggestion about increasingly sophisticated LLMs taking over linguistic tasks opens up a fascinating possibility: freeing humans to focus on non-linguistic aspects of experience. Here are some ways AI could assist us in this exploration:

1. Data analysis and pattern recognition:

  • Large-scale analysis of nonverbal communication: AI could analyze vast datasets of audio, video, and sensor data to identify patterns and relationships between nonverbal cues like facial expressions, gestures, posture, and vocal intonation. This could deepen our understanding of non-verbal communication in various contexts, from interpersonal communication to artistic expression.
  • Mapping emotions and mental states: AI could analyze physiological data like heart rate, skin conductance, and brain activity to identify correlations with emotions and mental states. This could lead to the development of tools that help us better understand our own and others' emotions, fostering empathy and improving communication.
  • Identifying patterns in non-verbal art forms: AI could analyze large collections of music, paintings, sculptures, and other non-verbal art forms to identify recurring patterns, styles, and themes. This could help us discover new insights into human creativity and unlock hidden meanings in art.

2. AI as a creative collaborator:

  • Generating novel forms of non-verbal communication: AI could be used to generate new forms of non-verbal communication, such as synthesized musical pieces, abstract visual art, or even gestures and facial expressions. This could lead to new artistic expressions and enhance communication beyond words.
  • Personalized non-verbal experiences: AI could personalize non-verbal experiences based on individual preferences and emotional states. For example, AI could curate music playlists or generate visual art that resonates with a user's mood. This could deepen our emotional engagement with non-verbal art forms and enhance our well-being.
  • Interactive storytelling and learning: AI could create interactive non-verbal narratives that adapt to user choices and emotions. This could provide engaging and personalized learning experiences in various fields, from language learning to cultural immersion.

3. AI as a facilitator for human interaction:

  • Real-time translation and interpretation of non-verbal cues: AI could be used to translate nonverbal cues in real-time, facilitating communication across cultures and bridging the gap between people with different communication styles.
  • Augmenting sensory experiences: AI could be used to create immersive virtual environments that stimulate various senses, allowing us to explore and interact with the world in new ways. This could broaden our understanding of non-verbal communication and deepen our appreciation for sensory experiences.
  • Promoting mindfulness and self-awareness: AI could be used to develop tools that help us become more aware of our own non-verbal communication and emotions. This could enhance our self-awareness and improve our ability to manage our emotions and relationships.

4. Ethical considerations:

While AI holds immense potential for assisting us in exploring the non-linguistic aspects of our lives, ethical considerations must be addressed. These include:

  • Bias and fairness: AI models are susceptible to biases present in the data they are trained on. It is crucial to ensure that AI tools used to analyze and interpret non-verbal communication are unbiased and fair.
  • Data privacy and security: The use of AI necessitates the collection and analysis of personal data. It is essential to ensure data privacy and security, safeguarding individuals' information from unauthorized access or misuse.
  • Human autonomy and control: AI should be used to enhance human understanding and experience, not replace it. We must ensure that AI remains a tool that empowers humans to explore and appreciate the non-linguistic aspects of life, not a force that dictates our thoughts and emotions.

By leveraging the power of AI responsibly and ethically, we can embark on a journey of deeper understanding and engagement with the non-linguistic aspects of our existence. This journey holds the potential to enrich our lives, strengthen our relationships, and unlock new forms of creativity and expression.


ChatGPT-4's response:

The idea of AI assisting in understanding and elucidating non-linguistic aspects of human experience is intriguing and holds much potential. Here are several ways in which AI could be instrumental in this area:

  1. Enhanced Emotional Recognition and Analysis: AI can be trained to recognize and interpret human emotions through facial expressions, voice modulation, and body language. This would be particularly useful in enhancing interpersonal communication and understanding, especially in situations where verbal communication is limited or not possible.

  2. Music and Art Analysis: AI can analyze patterns, styles, and historical influences in music and art. By doing so, it can provide insights into the emotional and cultural contexts of various artworks or musical pieces, deepening our appreciation and understanding of these forms.

  3. Virtual Reality and Augmented Reality Experiences: AI can be used to create immersive VR and AR experiences that enable individuals to explore non-linguistic aspects of human culture and history, such as virtual tours of art galleries, historical sites, or even recreations of historical events.

  4. Non-Verbal Communication Training: AI can assist in teaching and improving non-verbal communication skills, such as understanding body language, gestures, and facial expressions, which are crucial for effective interpersonal interactions.

  5. Social Mirroring and Behavioral Analysis: AI can analyze social interactions and provide feedback or insights into social dynamics. This could be useful in understanding group behaviors, cultural norms, and even in improving social skills.

  6. Sensory Enhancement Technologies: AI-driven technologies could enhance or augment human senses, allowing for a deeper engagement with the non-linguistic elements of the world, such as experiencing a wider range of sounds or visual details.

  7. Emotional and Psychological Support: AI can offer personalized support and interventions based on non-linguistic cues, aiding in mental health and emotional well-being.

  8. Enhancing Creative Processes: AI can assist artists, musicians, and creators by suggesting non-linguistic elements that might enhance their work, based on analysis of vast amounts of artistic and musical data.

  9. Personalized Learning and Development: AI can tailor learning experiences to individual preferences and learning styles, focusing on non-linguistic methods like interactive simulations, visual aids, and hands-on activities.

  10. Cross-Cultural Understanding and Communication: AI can help bridge cultural gaps by providing insights into non-verbal cues and norms that are specific to different cultures, aiding in cross-cultural understanding and communication.

In all these ways, AI has the potential to significantly enrich human understanding and appreciation of the non-linguistic aspects of our lives, allowing us to engage more deeply with the world beyond words.


Sunday, November 26, 2023

Religious wars in the tech industry.

In this post I continue to use MindBlog as my private reference library to put down essential ideas I want to hang onto by extracting relevant clips, in this case from the most recent essay by Venkatesh Rao. Here are the clips:

Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The OpenAI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.

Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.

Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.

Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.

Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.

Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.

It’s not that there’s no common ground. Everyone agrees GPUs are important, Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is the part that is common ground is largely disconnected from the contentious bits.

In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps continuation of incredulous staring by other means. Such as boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and in the future, perhaps actual hot wars.

Why do I think we are entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) The vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle of a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.

The Six Religions

Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.

The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.

The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.

The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.

The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.

The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.

The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.

Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?

The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology. …. Ordinary political interests, while present, are secondary.

The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…

The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.

What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.

Friday, September 22, 2023

This is the New 'Real World'

For my own later reference, and hopefully of use to a few MindBlog readers,  I have edited, cut and pasted, and condensed from 3960 to 1933 words the latest brilliant article generated by Venkatesh Rao at

The word world, when preceded by the immodest adjective real, is a self-consciously anthropocentric one, unlike planet or universe. To ask what sort of world we live in invites an inherently absurd answer. But if enough people believe in an absurd world, absurd but consequential histories will unfold. And consequentiality, if not truth, perhaps deserves the adjective real.

Not all individual worlds that in principle contribute to the real world are equally consequential… A familiar recent historical real world, the neoliberal world, was shaped more by the beliefs of central bankers than by the beliefs of UFO-trackers. You could argue that macroeconomic theories held by central bankers are not much less fictional than UFOs. But worlds built around belief in specific macroeconomic theories mattered more than ones built around belief in UFOs. In 2003 at least, it would have been safe to assume this; it is no longer a safe assumption in 2023.

Of the few hundred consciously shared worlds like religions, fandoms, and nationalisms that are significant, perhaps a couple of dozen matter strongly, and perhaps a dozen matter visibly, the other dozen comprising various sorts of black or gray swans lurking in the margins of globally recognized consequentiality.

This, then, is the “real” world — the dozen or so worlds that visibly matter in shaping the context of all our lives…The consequentiality of the real world is partly a self-fulfilling prophecy of its own reality. Something that can play the role of truth. For a while.

The fact that some worlds survive a brutal winnowing process does not alter the fact that they remain anthropocentric is/ought conceits … A world that has made the cut to significance and consequentiality, to the level of mattering, must still survive its encounters with material, as opposed to social, realities... For all the consequential might of the Catholic Church in the 17th century, it was Galileo’s much punier Eppur si muove world that eventually ended up mattering more. Truth eventually outweighed short-term consequentiality in the enduring composition of real.

It would take a couple of centuries for Galileo’s world to be counted among the ones that mattered in shaping the real world. And the world of the Catholic Church, despite centuries of slow decline, still matters… It is just that the real world has gotten much bigger in scope, and other worlds constituting it, like the one shaping the design of the iPhone 15, matter much more.

…to answer a question like what sort of world do we live in? is to craft an unwieldy composite portrait out of the dozen or so constituent worlds that matter at any given time …it is a fragile, unreliable, dubious, borderline incoherent, unsatisfying house of cards destined to die. Yet, while it lives and reigns, it is an all-consuming, all-dominating thing… the “real” world is not necessarily any more real than private fantasies. It is merely vastly more consequential — for a while.

When “the real world” goes away because we’ve stopped believing in it, as tends to happen every few decades, it can feel like material reality itself, rather than a socially constructed state of mind, has come undone. And we scramble to construct a new real world. It is a necessary human tendency. Humans need a real world to serve as a cognitive “outdoors” (and escape from “indoors”), even if such worlds are neither eternal nor true. A shared place we can accuse each other of not living in, and of being detached from… Humans will conspire to cobble together a dozen new fantasies and label it real world, and you and I will have to live in it too.

So it is worth asking the question, what sort of world do we live in? And it is worth actually constructing the answer, and giving it the name the real world, and using it to navigate life — for a while.

So let’s take a stab at it.

The real world of the early eighties was one defined by the boundary conditions of the Cold War, an Ozone hole, PCs, video games, Michael Jackson, a pre-internet social fabric, and no pictures of Uranus, Neptune, Pluto, or black holes shaping our sense of the place of our planet within the broader cosmos.

The real world that took shape in the nineties, the neoliberal world to which Margaret Thatcher declared there is no alternative (TINA), was one defined by the rise of the internet, unipolar geopolitics, the economic ascent of China, The Simpsons, Islamic terrorism, and perhaps most importantly, a sense of politics ceasing to matter against the backdrop of an unstoppable increase in global prosperity.

That real world began to wobble after 9/11, bust critical seams during the Great Recession, and started to go away in earnest after 2015, in the half-decade that ended with the pandemic. The passing of the neoliberal world was experienced as a trauma across the world, even by those who managed to credibly declare themselves winners.

What has taken shape in the early 20s defies a believable characterization as real, for winners and losers alike. Declaring it weird studiously avoids assessments of realness. Some, like me, go further and declare the world to be permaweird…the weirdness is here to stay.

Permaweird does not mean perma-unreal. The elusiveness of a “New Normal” does not mean no “New Real” can emerge, out of new, and learnable, patterns of agency and consequentiality…the forces shaping the New Real are becoming clear. Here is a list off the top of my head. It should be entirely unsurprising.

1 Energy transition
2 Aging population
3 Weird weather
4 Machine learning
5 Memefied politics
6 The slowing of Moore’s Law
7 Meaning crises (plural)
8 Stagnation of the West
9 Rise of the Rest
10 Post-ZIRP economics
11 Post-Covid supply chains
12 Climate refugee movements

You will notice that none of the forces on the list is particularly new or individually very weird. What’s weird is the set as a whole, and the difficulty of putting them together into a notion of normalcy.

Forces though, are not worlds. We may trade in our gasoline-fueled cars for EVs, but we do not inhabit “the energy transition” the way we inhabit a world-idea like “neoliberalism” or “religion.”

Sometimes forces directly translate into consequential worlds. In the 1990s, the internet was a force shaping the real world, and also created a world — the inhabitable world of the very online — that was part of the then-emerging sense of “real.”

Sometimes forces indirectly create worlds. Low interest rates created another important constituent world of the Old Real… Vast populations in liberal urban enclaves lived out ZIRPy lifestyles, eating their avocado toast, watching TED talks, riding sidewalk scooters, producing “content”, and perversely refusing to be rich enough to buy homes.

Something similar appears to be happening in response to the force of post-ZIRP economics. The public internet, dominated by vast global melting-pot platforms featuring vast culture wars, appears to be giving way to a mix of what I’ve called cozyweb enclaves and protocol media… This world, too, will be positioned to consequentially shape the New Real as strongly as the very online world shaped the Old Real.

I won’t try to provide careful arguments here, or justify my speculative inventory of forces, but here is my list of resulting worlds being carved out by them, which I have arrived at via a purely impressionistic leap of attempted synthesis. Together, these worlds constitute the New Real:

1 Climate refugee world
2 Disaster world (the set of places currently experiencing disaster conditions)
3 Dark Forest online world
4 Death-star world (centered on the assemblage of spaces controlled by declining wealth or power concentrations)
5 Non-English civilizational worlds (including Chinese and Indian)
6 Weird weather worlds
7 Non-institutional world (including, but not limited to, free-agent and blockchain-based worlds)
8 Trad Retvrn LARP world
9 Retiree world
10 Silicon realpolitik world
11 AI-experienced world
12 Resource-localism world (set of spaces shaped by a dominant scarce resource like energy or water)

These worlds are worlds because it is possible to imagine lifestyles entirely immersed in them. They are consequential worlds because each already has enough momentum and historical leverage to reshape the composite understanding of real. What climate refugees do in climate refugee world will shape what all of us do in the real world.

World 4 is worth some elaboration. In it I include almost everything that dominates current headlines and feels “real,” including spaces dominated by billionaires, governments, universities, and traditional media. Yet, despite the degree to which it dominates the current distribution of attention, my sense is that it has only a small and diminishing role to play in defining the New Real. When we use the phrase in the real world in the coming decade, we will not mainly be referring to World 4.

World 11 is also worth some elaboration. One reason I believe weirdness is here to stay is that the emerging ontologies of the New Real are neither entirely human in origin, nor likely to respect human desires for common-sense conceptual stability in “reality.”

For the moment, AIs inhabit the world on our terms, relating to it through our categories. But it is already clear that they are not restricted to human categories, or even to categories expressible within human languages. Nor should they be, if we are to tap into their powers. They are limited by human ontology only to the extent that their presence in the world must be mediated by humans. … they will definitely evolve in ways that keep the real world permaweird.

Can we slap on a usefully descriptive short label onto the New Real, comparable to “Neoliberal World” or “Cold War World”?  

There is no such obviously dominant eigenvector of consequentiality in the New Real, but the most obvious candidate is probably global warming. So we might call the New Real the warming world. Somehow though, it doesn’t feel like warming shapes our experience of realness as clearly as its predecessors. Powerful though the calculus of climate change is, it operates via too many subtle degrees of indirection to shape our sense of the real. Still, I’ll leave the phrase there for your consideration.

An idiosyncratic personal candidate … is magic-realist world. A world that is consequentially real and permaweird is a world that feels magical and real at the same time, and is sustainably inhabitable: but only if you let go a craving for a sense of normalcy.

It offers unprecedented, god-like modes of agency that are available for almost anyone to exercise…The catch is this — attachment to normalcy equals learned helplessness in the face of all this agency. If you want to feel normal, almost none of the magical agency is available to you. An attachment to normalcy limits you to mere magical thinking, in the comforting company of an equally helpless majority. If you are willing to live with a sense of magical realism, a great deal more suddenly opens up.

This, I suspect, is the flip side of the idea that “we are as gods, and might as well get good at it.” There is no normal way to feel like a god. A magical being must necessarily experience the world as a magical doing. To experience the world as permaweird is to experience it as a god.

This is not necessarily an optimistic thought. A real world, shaped by god-like humans, each operating by an idiosyncratic sense of their own magical agency, is not necessarily a good world, or a world that conjures up effective collective responses to its shared planetary problems.

But it is a world that does something, rather than nothing, and that’s a start.

Monday, August 21, 2023

Never-Ending Stories - a survival tactic for uncertain times

I keep returning to clips of text that I abstracted from a recent piece by Venkatesh Rao. It gets richer for me on each re-reading. I like its points about purpose being inappropriate for uncertain times, when the simplification offered by a protocol narrative is the best route to survival. I post the clips here for my own future use, also thinking it might interest some MindBlog readers:

Never-Ending Stories

Marching beat-by-beat into a Purposeless infinite horizon

During periods of emergence from crisis conditions (both acute and chronic), when things seem overwhelming and impossible to deal with, you often hear advice along the following lines:

Take it one day at a time

Take it one step at a time

Sleep on it; morning is wiser than evening

Count to ten

Or even just breathe

All these formulas have one thing in common: they encourage you to surrender to the (presumed benevolent) logic of a situation at larger temporal scales by not thinking about it, and only attempt to exercise agency at the smallest possible temporal scales.

These formulas typically move you from a state of high-anxiety paralyzed inaction or chaotic, overwrought thrashing, to deliberate but highly myopic action. They implicitly assume that lack of emotional regulation is the biggest immediate problem and attempt to get you into a better-regulated state by shrinking time horizons. And that deliberate action (and more subtly, deliberate inaction) is better than either frozen inaction or overwrought thrashing.

There is no particular reason to expect taking things step-by-step to be a generally good idea. Studied, meditative myopia may be good for alleviating the subjective anxieties induced by a stressful situation, but there’s no reason to believe that the objective circumstances will yield to the accumulating power of “step-by-step” local deliberateness.

So why is this common advice? And is it good advice?

I’m going to develop an answer using a concept I call narrative protocols. This step-by-step formula is a typical invocation of such protocols. They seem to work better than we expect under certain high-stress conditions.

Protocol Narratives, Narrative Protocols

Loosely speaking, a protocol narrative is a never-ending story. I’ll define it more precisely as follows:

A protocol narrative is a never-ending story, without a clear capital-P Purpose, driven by a narrative protocol that can generate novelty over an indefinite horizon, without either a) jumping the shark, b) getting irretrievably stuck, or c) sinking below a threshold of minimum viable unpredictability.

A narrative protocol, for the purposes of this essay, is simply a storytelling formula that allows the current storytellers to continue the story one beat at a time, without a clear idea of how any of the larger narrative structure elements, like scenes, acts, or epic arcs, might evolve.

Note that many narrative models and techniques, including the best-known one, the Hero’s Journey, are not narrative protocols because they are designed to tell stories with clear termination behaviors. They are guaranteed-ending stories. They may be used to structure episodes within a protocol narrative, but by themselves are not narrative protocols.

This pair of definitions is not as abstract as it might seem. Many real-world fictional and non-fictional narratives approximate never-ending stories.

Long-running extended universe franchises (Star Wars, Star Trek, MCU), soap operas, South Park …, the Chinese national grand narrative, and perhaps the American one as well, are all approximate examples of protocol narratives driven by narrative protocols.

Protocols and Purpose

In ongoing discussions of protocols, several of us independently arrived at a conclusion that I articulate as protocols have functions but not purposes, by which I mean capital-P Purposes. Let’s distinguish two kinds of motive force in any narrative:

1. Functions are causal narrative mechanisms for solving particular problems in a predictable way. For example, one way to resolve a conflict between a hero and a villain is a fight. So a narrative technology that offers a set of tropes for fights has something like a fight(hero, villain) function that skilled authors or actors can invoke in specific media (text, screen, real-life politics). You might say that fight(hero, villain) transitions the narrative state causally from a state of unresolved conflict to resolved conflict. Functions need not be dramatic or supply entertainment though; they just need to move the action along, beat-by-beat, in a causal way.

2. Purposes are larger philosophical theses whose significance narratives may attest to, but do not (and cannot) exhaust. These theses may take the form of eternal conditions (“the eternal struggle between good and neutral”), animating paradoxes (“If God is good, why does He allow suffering to exist?”), or historicist, teleological terminal conditions. Not all stories have Purposes, but the claim is often made that the more elevated sort can and should. David Mamet, for instance, argues that good stories engage with and air eternal conflicts, drawing on their transformative power to drive events, without exhausting them.

In this scheme, narrative protocols only require a callable set of functions to be well-defined. They do not need, and generally do not have Purposes. Functions can sustain step-by-step behaviors all by themselves.
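The idea of a narrative driven purely by callable functions can be made concrete. Here is a minimal sketch, in Python, of a protocol narrative advanced beat-by-beat by functions like the essay's `fight(hero, villain)`. Everything beyond that one function name (the state dictionary, `journey`, the character names) is my own illustrative invention, not something from the essay:

```python
# A toy protocol narrative: callable functions that move a story state
# from beat to beat, with no overarching Purpose. Only fight(hero, villain)
# comes from the essay; all other names are illustrative inventions.

def fight(state, hero, villain):
    """Causally resolve an unresolved conflict in a single beat."""
    state["conflicts"].remove((hero, villain))
    state["beats"].append(f"{hero} fights {villain}")
    return state

def journey(state, character, place):
    """Move the action along without resolving anything."""
    state["beats"].append(f"{character} travels to {place}")
    return state

# The protocol needs only functions to keep the story going indefinitely:
# each call transitions the narrative state one beat at a time.
story = {"conflicts": [("Ada", "Mallory")], "beats": []}
story = journey(story, "Ada", "the capital")
story = fight(story, "Ada", "Mallory")

assert story["conflicts"] == []   # conflict causally resolved
assert len(story["beats"]) == 2   # story advanced beat-by-beat
```

The point of the sketch is that nothing in it terminates: as long as functions keep being called, the story keeps going, which is exactly what distinguishes a protocol narrative from a guaranteed-ending one.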

What’s more, not only are Purposes not necessary, they might even be actively harmful during periods of crisis, when arguably a bare-metal protocol narrative, comprising only functions, should exist.

There is, in fact, a tradeoff between having a protocol underlying a narrative, and an overarching Purpose guiding it from “above.”

The Protocol-Purpose Tradeoff

During periods of crisis, when larger logics may be uncomputable, and memory and identity integration over longer epochs may be intractable, it pays to shorten horizons until you get to computability and identity integrity — so long as the underlying assumptions that movement and deliberation are better than paralysis and thrashing hold.

The question remains though. When are such assumptions valid?

This is where the notion of a protocol enters the picture in a fuller way. There are protocols as in a short foreground behavior sequence (like step-by-step), but there is also the idea of a big-P Protocol, as in a systematic (and typically constructed rather than natural) reality in the background that has more lawful and benevolent characteristics than you may suspect.

Enacting protocol narratives is enacting trust in a big-P Protocolized environment. You trust that the protocol narrative is much bigger than the visible tip of the iceberg that you functionally relate to.

As a simple illustration, on a general somewhat sparse random graph, trying to navigate by a greedy or myopic algorithm, one step at a time, to get to destination coordinates, is likely to get you trapped in a random cul-de-sac. But that same algorithm, on a regular rectangular grid, will not only get you to your destination, it will do so via a shortest path. You can trust the gridded reality more, given the same foreground behaviors.

In this example, the grid underlying the movement behavior is the big-P protocol that makes the behavior more effective than it would normally be. It serves as a substitute for the big-P purpose.
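The grid half of this analogy is easy to verify with a small sketch (the function name and coordinates are my own, purely illustrative): a greedy walker on a rectangular grid that myopically reduces its distance to the goal one step at a time always arrives, and does so in exactly the Manhattan distance, i.e., via a shortest path. No such guarantee holds for the same rule on a sparse random graph, where a locally best step can lead into a cul-de-sac:

```python
# Illustrative sketch of the navigation analogy (names are my own).
# On a regular rectangular grid, a purely myopic, one-step-at-a-time
# rule reaches the goal via a shortest path.

def greedy_grid_walk(start, goal):
    """Step greedily toward the goal on an integer grid.

    Each step reduces the remaining Manhattan distance by exactly 1,
    so the walk terminates in |dx| + |dy| steps: a shortest path.
    """
    x, y = start
    gx, gy = goal
    steps = 0
    while (x, y) != (gx, gy):
        if x != gx:
            x += 1 if gx > x else -1  # close the x-gap first
        else:
            y += 1 if gy > y else -1  # then close the y-gap
        steps += 1
    return steps

# Step count equals Manhattan distance |3-0| + |4-0| = 7.
assert greedy_grid_walk((0, 0), (3, 4)) == 7
```

The same greedy rule applied to an arbitrary sparse graph has no invariant like "every step reduces the distance," which is the formal content of the essay's point: the background structure (the grid, the big-P Protocol) is what makes the myopic foreground behavior trustworthy.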

This also gives us a way to understand the promises, if not the realities, of big-P purposes of the sort made by religion, and why there is an essential tension and tradeoff here. 

To take a generic example, let’s say I tell you that in my religion, the cosmos is an eternal struggle between Good and Evil, and that you should be Good in this life in order to enjoy a pleasurable heaven for eternity (terminal payoff) as well as to Do The Right Thing (eternal principle).

How would you use it?

This is not particularly useful in complex crisis situations where good and evil may be hard to disambiguate, and available action options may simply not have a meaningful moral valence.

The protocol directive of step-by-step is much less opinionated. It does not require you to act in a good way. It only requires you to take a step in a roughly right direction. And then another. And another. The actions do not even need to be justifiably rational with respect to particular consciously held premises. They just need to be deliberate.


A sign that economic narratives are bare-bones protocol narratives is the fact that they tend to continue uninterrupted through crises that derail or kill other kinds of narratives. Through the Great Weirding and the Pandemic, we still got GDP, unemployment, inflation, and interest rate “stories.”

I bet that even if aliens landed tomorrow, even though the rest of us would be in a state of paralyzed inaction, unable to process or make sense of events, economists would continue to publish their numbers and argue about whether aliens landing is inflationary or deflationary. And at the microeconomic level, Matt Levine would probably write a reassuring Money Matters column explaining how to think about it all in terms of SEC regulations and force majeure contract clauses.

I like making fun of economists, but if you think about this, there is a profound and powerful narrative capability at work here. Strong protocol narratives can weather events that are unnarratable for all other kinds of narratives. Events that destroy high-Purpose religious and political narratives might cause no more than a ripple in strong protocol narratives.

So if you value longevity and non-termination, and you sense that times are tough, it makes sense to favor Protocols over Purposes.


Step-by-Step is Hard-to-Kill

While economic narratives provide a good and clear class of examples of protocol narratives, they are not the only or even best examples.

The best examples are ones that show that a bare set of narrative functions is enough to sustain psychological life indefinitely. That surprisingly bleak narratives are nevertheless viable.

The very fact that we can even talk of “going through the motions” or feeling “empty and purposeless” when a governing narrative for a course of events is unsatisfying reveals that something else is in fact continuing, despite the lack of Purpose. Something that is computationally substantial and life-sustaining.

I recall a line from (I think) an old Desmond Bagley novel I read as a teenager, where a hero is trudging through a trackless desert. His inner monologue is going, one bloody foot after the next bloody foot; one bloody step after the next bloody step.

Weird though it might seem, that’s actually a complete story. It works as a protocol narrative. There is a progressively summarizable logic to it, and a memory-ful evolving identity to it. If you’re an economist, it might even be a satisfying narrative, as good as “number go up.”

Protocol narratives only need functions to keep going.

They do not need Purposes, and generally are, to varying degrees, actively hostile to such constructs. It’s not just take it one day at a time, but an implied don’t think about weeks and months and the meaning of life; it might kill you.

While protocol narratives may tolerate elements of Purpose during normal times, they are especially hostile to them during crisis periods. If you think about it, step-by-step advancement of a narrative is a minimalist strategy. If a narrative can survive on a step-by-step type protocol alone, it is probably extraordinarily hard to kill, and doing more likely adds risk and fragility (hence the Protocol-Purpose tradeoff).

During periods of crisis, narrative protocols switch into a kind of triage mode where only step-by-step movement is allowed (somewhat like how, in debugging a computer program, stepping through code is a troubleshooting behavior). More abstract motive forces are deliberately suspended.

I like to think of the logic governing this as exposure therapy for life itself. In complex conditions, the most important thing to do is simply to choose life over and over, deliberately, step-by-step. To keep going is to choose life, and it is always the first order of business.

This is why, as I noted in the opening section, lack of emotional regulation is the first problem to address. Because in a crisis, if it is left unmanaged, it will turn into a retreat from life itself. As Franklin D. Roosevelt said, the only thing we have to fear is fear itself.

To reach for loftier abstractions than step-by-step in times of crisis is to retreat from life. Purpose is a life-threatening luxury you cannot afford in difficult times. But a narrative protocol will keep you going through even nearly unnarratable times. And even if it feels like merely going through empty motions, sometimes all it takes to choose life is to be slightly harder to kill.