Showing posts with label technology. Show all posts

Thursday, March 14, 2024

An inexpensive Helium Mobile 5G cellphone plan that pays you to use it?

This is a follow-up to the previous post describing my setting up a 5G hotspot on Helium’s decentralized 5G infrastructure that earns MOBILE tokens. The cash value of the MOBILE tokens earned since July 2022 is ~7X the cost of the equipment needed to generate them.

Now I want to document some further facts for my future self and MindBlog’s techie readers.

Recently Helium has introduced Helium Mobile, a cell phone plan using this new 5G infrastructure which costs $20/month - much less expensive than other cellular providers like Verizon and AT&T.  Helium has partnered with T-Mobile to fill in coverage areas its own 5G network hasn’t reached.

Nine days ago I downloaded the Helium Mobile app onto my iPhone 12 and set up an account with an eSIM and a new phone number, alongside my longtime Verizon phone number, which uses a physical SIM card.

My iPhone has been earning MOBILE tokens by sharing its location to allow better mapping of the Helium 5G network.  As I am writing this, the app has earned 3,346 MOBILE tokens that could be sold and converted to $14.32 at this moment (the price of MOBILE, like other cryptocurrencies, is very volatile).

If this earning rate continues (a big ‘if’), the cellular account I am paying $20/month for will be generating MOBILE tokens each month worth ~$45. The $20 monthly cell phone plan charge can be paid with MOBILE tokens, leaving ~$25/month of passive income from my subscribing to Helium Mobile and allowing anonymous tracking of my phone as I move about.  (Apple sends a message every three days asking if I am sure I want to allow continuous tracking by this one app.)
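
For anyone who wants to check my arithmetic, here is a rough back-of-the-envelope sketch in Python. The figures are the ones from the app above; the projection assumes the earning rate and the MOBILE price both hold steady (which they almost certainly won't), and it actually lands a bit above my rounded numbers:

    # Back-of-the-envelope projection from 9 days of Helium Mobile earnings.
    # Assumes the earning rate and token price stay flat - a big assumption.
    tokens_earned = 3346            # MOBILE tokens earned in the first 9 days
    usd_value = 14.32               # cash value of those tokens at this moment
    days = 9
    plan_cost = 20.00               # monthly Helium Mobile plan charge

    projected_monthly = usd_value / days * 30             # ~ $47.7/month at this rate
    net_passive_income = projected_monthly - plan_cost    # ~ $27.7/month left over

    print(f"Projected monthly earnings: ${projected_monthly:.2f}")
    print(f"Left over after paying the plan in MOBILE: ${net_passive_income:.2f}")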

So there you have it.  Any cautionary notes from techie readers about the cybersecurity implications of what I am doing would be welcome.  

Wednesday, March 13, 2024

MindBlog becomes a 5G cellular hotspot in the low-priced ‘People’s Cell Phone Network’ - Helium Mobile

I am writing this post, as is frequently the case, for myself to be able to look up in the future, as well as for MindBlog techie readers who might stumble across it. It describes my setup of a 5G hotspot in the new Helium Mobile 5G network. A post following this one will describe my becoming a user of this new cell phone network by putting the Helium Mobile app on my iPhone using an eSIM.

This is my third post describing my involvement in the part of the crypto movement seeking to 'return power to the people' by bypassing the large corporations that are the current gatekeepers and regulators of commerce and communications, and that assert controls serving their own self-interest and profits more than the public good.

The two previous posts (here and here) describe my being seduced into crypto-world by my son's having made a six-hundred-fold return on investment by virtue of being in the first cohort (during the “genesis” period) to put little black boxes and antennas on their windowsills, earning HNT (Helium blockchain tokens) with LoRa 868 MHz antennas transmitting and receiving in the 'Internet of Things.' I was a latecomer, and in the 22 months since June of 2022 have earned ~$200 on an investment of ~$500 of equipment.

Helium next came up with the idea of setting up its own 5G cell phone network, called Helium Mobile. Individual Helium 5G hotspots (small cell phone antennas) use Citizens Broadband Radio Service (CBRS) radios to provide cellular coverage like that provided by telecom companies' more expensive networks of towers. (CBRS is a wide-broadcast 3.5 GHz band in the United States that does not require a spectrum license for use.)

In July of 2022, I decided to set up the Helium 5G hotspot equipment shown in the picture below, to be in the genesis period for the establishment of this new Helium 5G cellular network.  I made my Abyssinian cat Martin, shown in front of the Bobber 500 miner, the system administrator. The 5G antenna seen on the sill in the middle of the window views ~170 degrees of the southern sky.

This system cost ~$2,500 and by early March 2024 has earned ~4.3 million MOBILE tokens worth ~$18,000. As in a Ponzi scheme, most of the rewards are from the genesis period; March 2024 earnings are ~$45/week.  If this rate of earnings persists, it represents an annual ROI (return on investment) of ~100%.
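
A quick sanity check on that ROI figure, again as a rough sketch (the "if this rate persists" assumption is doing all the work here):

    # Annualized return on the hotspot hardware at the March 2024 earning rate.
    equipment_cost = 2500.00        # approximate cost of the Bobber 500 setup
    weekly_earnings = 45.00         # approximate March 2024 earnings in USD
    annual_earnings = weekly_earnings * 52      # ~$2,340/year
    roi = annual_earnings / equipment_cost      # ~0.94, i.e. roughly 100%
    print(f"Annualized ROI: {roi:.0%}")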

The writing above is mine, and I decided just for the heck of it to recruit the assistance of several GPT engines (Google Gemini, OpenAI ChatGPT-4, Anthropic Claude, and Microsoft Bing) with the following prompt:

Could you please act as Deric Bownds writing a post for his Blog, Deric’s MindBlog at http://mindblog.dericbownds.net. Describe my setting up of a Helium G5 hot spot using the Bobber 500 miner with FreedomFi CBRS and an internal antenna during the genesis period for the establishment of the new Helium G5 cellular network. 

The responses of Gemini and Bing were excited, poetic, lyrical, overwrought hyperbole, very much in the spirit of the cyber-world pitches one sees online. ChatGPT-4 (the best response) and Claude did toss up some interesting paragraphs, and below I pass on some of ChatGPT-4's slightly edited and redundant text. I would be more sanguine about all of the hype over 'democratic participation' (or at least my participation) if I had not been completely flummoxed by the cacophony of the Helium Discord server, YouTube videos, and the complexity of the HIP (Helium Improvement Proposal) votes that ratify new changes and improvements to the system.

The Helium Network, known for pioneering decentralized wireless infrastructure, has embarked on an ambitious journey to extend its reach into 5G, leveraging the power of blockchain technology to create a community-powered network. The initiative is ambitious, aiming to democratize cellular connectivity, making it more accessible and affordable. This endeavor is not just about technology; it's about reimagining the fabric of our digital society.
This genesis period of the Helium G5 network represented a fascinating intersection of technology, psychology, and community. The network's decentralized nature challenges traditional models of infrastructure ownership and control, proposing a more democratic and participatory framework. It prompts us to reflect on our relationship with technology: Are we mere consumers, or can we also be contributors and custodians of the digital landscapes we inhabit?
As more individuals and communities join this initiative, setting up their hotspots and expanding the network, a unique form of collective intelligence emerges. This network, powered by the people, for the people, embodies a new kind of digital ecosystem—one that values collaboration over competition, and shared success over isolated gains.
The implications for our cognitive landscapes are profound. By democratizing access to 5G connectivity, we are not just enhancing our ability to communicate; we are potentially altering the way we think, learn, and engage with the world around us. The Helium G5 network invites us to imagine a future where connectivity is as natural and essential as the air we breathe, a future where our digital and physical realities are seamlessly intertwined.
The technology we embrace today shapes the world we live in tomorrow. Through initiatives like the Helium G5 network, we have the opportunity to craft a digital environment that reflects our highest aspirations for society—one that is open, inclusive, and fundamentally human.
In this journey of connectivity, we are all contributors, shaping the digital ethos of our era. The genesis of the Helium G5 network is not just a technical milestone; it is a call to action, inviting us to rethink our role in the digital age and to contribute to a network that embodies our collective values and aspirations.

The next post describes my becoming a customer of the new Helium Mobile cellular service plan costing $20/month.



Monday, March 11, 2024

How AI’s GPT engines work - Lanier’s forest and trees metaphor.

Jaron Lanier does a piece in The New Yorker titled "How to Picture A.I." (if you hit the paywall by clicking the link, try opening an "empty tab" in your browser, then copy and paste in the URL that got you the paywall). I tried to do my usual sampling of small chunks of text to give the message, but found that very difficult, so I pass on several early paragraphs and urge you to read the whole article. Lanier's metaphors give me a better sense of what is going on in a GPT engine, but I'm still largely mystified. Anyway, here's some text:
In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input.
Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.
Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.
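
To make Lanier's tree-of-grids picture a little more concrete, here is a toy Python sketch of the same idea: a bottom grid that measures edges in patches of the image, a coarser grid stacked on top of it, and a "training" step that nudges a couple of numbers until dog pictures read out near 1 and cat pictures near 2. It is a cartoon of his cartoon, with made-up 8x8 "images", not how real deep-learning systems are built:

    # Toy version of Lanier's "tree of grids". Images are small 2D lists of
    # numbers between 0 and 1; each layer is a grid computed from the layer below.

    def edge_grid(image, patch=2):
        """Bottom layer: how much the values change inside each small patch."""
        grid = []
        for i in range(0, len(image) - patch + 1, patch):
            row = []
            for j in range(0, len(image[0]) - patch + 1, patch):
                vals = [image[i + di][j + dj] for di in range(patch) for dj in range(patch)]
                row.append(max(vals) - min(vals))   # "prominence of sharp edges"
            grid.append(row)
        return grid

    def pool_grid(grid):
        """Next layer up: each number summarizes a 2x2 block of the layer below."""
        return [[max(grid[i][j], grid[i][j + 1], grid[i + 1][j], grid[i + 1][j + 1])
                 for j in range(0, len(grid[0]) - 1, 2)]
                for i in range(0, len(grid) - 1, 2)]

    def canopy(image):
        """Climb the tree and read off a single number at the top."""
        top = pool_grid(edge_grid(image))
        cells = [v for row in top for v in row]
        return sum(cells) / len(cells)

    def train(examples, steps=500, lr=0.1):
        """'Training': nudge two numbers so dog images score ~1.0 and cats ~2.0."""
        w, b = 0.0, 0.0
        for _ in range(steps):
            for image, target in examples:      # target: 1.0 = dog, 2.0 = cat
                x = canopy(image)
                err = target - (w * x + b)
                w += lr * err * x               # boost the numbers that help
                b += lr * err
        return w, b

    # Two fake 8x8 "photos": a flat cat picture and a high-contrast dog picture.
    cat = [[0.0] * 8 for _ in range(8)]
    dog = [[(i + j) % 2 for j in range(8)] for i in range(8)]
    w, b = train([(dog, 1.0), (cat, 2.0)])
    print(w * canopy(dog) + b, w * canopy(cat) + b)   # ~1.0 and ~2.0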

Monday, February 26, 2024

The "enjunkification" of our online lives

I want to pass on two articles I've pored over several times that describe the increasing "complexification" or "enjunkification" of our online lives. The first is "The Year Millennials Aged Out of the Internet" by millennial writer Max Read. Here are some clips from the article.

Something is changing about the internet, and I am not the only person to have noticed. Everywhere I turned online this year, someone was mourning: Amazon is “making itself worse” (as New York magazine moaned); Google Search is a “bloated and overmonetized” tragedy (as The Atlantic lamented); “social media is doomed to die,” (as the tech news website The Verge proclaimed); even TikTok is becoming enjunkified (to bowdlerize an inventive coinage of the sci-fi writer Cory Doctorow, republished in Wired). But the main complaint I have heard was put best, and most bluntly, in The New Yorker: “The Internet Isn’t Fun Anymore.”

The heaviest users and most engaged American audience on the internet are no longer millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it’s not our internet anymore. It belongs to zoomers now...zoomers, and the adolescents in Generation Alpha nipping at their generational heels, still seem to be having plenty of fun online. Even if I find it all inscrutable and a bit irritating, the creative expression and exuberant sociality that made the internet so fun to me a decade ago are booming among 20-somethings on TikTok, Instagram, Discord, Twitch and even X.

...even if you’re jealous of zoomers and their Discord chats and TikTok memes, consider that the combined inevitability of enjunkification and cognitive decline means that their internet will die, too, and Generation Alpha will have its own era of inscrutable memes and alienating influencers. And then the zoomers can join millennials in feeling what boomers have felt for decades: annoyed and uncomfortable at the computer.

The second article I mention is Jon Caramanica's "Have We Reached the End of TikTok’s Infinite Scroll?" Again, a few clips:

The app once offered seemingly endless chances to be charmed by music, dances, personalities and products. But in only a few short years, its promise of kismet is evaporating...increasingly in recent months, scrolling the feed has come to resemble fumbling in the junk drawer: navigating a collection of abandoned desires, who-put-that-here fluff and things that take up awkward space in a way that blocks access to what you’re actually looking for.
This has happened before, of course — the moment when Twitter turned from good-faith salon to sinister outrage derby, or when Instagram, and its army of influencers, learned to homogenize joy and beauty...the malaise that has begun to suffuse TikTok feels systemic, market-driven and also potentially existential, suggesting the end of a flourishing era and the precipice of a wasteland period.
It’s an unfortunate result of the confluence of a few crucial factors. Most glaring is the arrival of TikTok’s shopping platform, which has turned even small creators into spokespeople and the for-you page of recommendations into an unruly bazaar...The effect of seeing all of these quasi-ads — QVC in your pocket — is soul-deadening...The speed and volume of the shift has been startling. Over time, Instagram became glutted with sponsored content and buy links, but its shopping interface never derailed the overall experience of the app. TikTok Shop has done that in just a few months, spoiling a tremendous amount of good will in the process.



Friday, February 16, 2024

An agent-based vision for scaling modern AI - Why current efforts are misguided.

I pass on my edited clips from Venkatesh Rao’s most recent newsletter - substantially shortening its length and inserting in brackets [ ] a few definitions of the techno-nerd-speak acronyms he uses.  He suggests interesting analogies between the future evolution of AI and the evolutionary course taken by biological organisms:

…specific understandings of embodiment, boundary intelligence, temporality, and personhood, and their engineering implications, taken together, point to an agent-based vision of how to scale AI that I’ve started calling Massed Muddler Intelligence or MMI, that doesn’t look much like anything I’ve heard discussed.


…right now there’s only one option: monolithic scaling. Larger and larger models trained on larger and larger piles of compute and data…monolithic scaling is doomed. It is headed towards technical failure at a certain scale we are fast approaching


What sort of AI, in an engineering sense, should we attempt to build, in the same sense as one might ask, how should we attempt to build 2,500 foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick and mortar construction simply does not scale to those heights


…If we build AI datacenters that are 10x or 100x the scale of todays and train GPT-style models on them …problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable…current monolithic approaches to scaling AI are the equivalent of brick-and-mortar construction and fundamentally doomed…We need the equivalent of a reinforced concrete beam for AI…A distributed agent-based vision of modern AI is the scaling solution we need.

Scaling Precedents from Biology

There’s a precedent here in biology. Biological intelligence scales better with more agent-like organisms. For example: humans build organizations that are smarter than any individual, if you measure by complexity of outcomes, and also smarter than the scaling achieved by less agentic eusocial organisms…ants, bees, and sheep cannot build complex planet-scale civilizations. It takes much more sophisticated agent-like units to do that.

Agents are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (i.e., in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority…as in sociobiology, sustainably scalable AI agents will necessarily have the ability to govern and influence other agents (human or AI) in turn, through the same symmetric mechanisms that are used to govern and influence them…If you want to scale AI sustainably, governance and influence cannot be a one-way street from some privileged agents (humans) to other less privileged agents (AIs)….

If you want complexity and scaling, you cannot govern and influence a sophisticated agent without opening yourself up to being governed and influenced back. The reasoning here is similar to why liberal democracies generally scale human intelligence far better than autocracies. The MMI vision I’m going to outline could be considered “liberal democracy for mixed human-AI agent systems.” Rather than the autocratic idea of “alignment” associated with “AGI,” MMIs will call for something like the emergent mutualist harmony that characterizes functional liberal democracies. You don’t need an “alignment” theory. You need social contract theory.

The Road to Muddledom

Agents, and the distributed multiagent systems (MAS) that represent the corresponding scaling model, obviously aren’t a new idea in AI…MAS were often built as light architectural extensions of early object-oriented non-AI systems… none of this machinery works or is even particularly relevant for the problem of scaling modern AI, where the core source of computational intelligence is a large-X-model with fundamentally inscrutable input-output behavior. This is a new, oozy kind of intelligence we are building with for the first time. ..We’re in new regimes, dealing with fundamentally new building materials and aiming for new scales (orders of magnitude larger than anything imagined in the 1990s).

Muddling Doctrines

How do you build muddler agents? I don’t have a blueprint obviously, but here are four loose architectural doctrines, based on the four heterodoxies I noted at the start of this essay (see links there): embodiment, boundary intelligence, temporality, and personhood.

Embodiment matters: The physical form factor AI takes is highly relevant to its nature, behavior, and scaling potential.

Boundary intelligence matters: Past a threshold, intelligence is a function of the management of boundaries across which data flows, not the sophistication of the interiors where it is processed.

Temporality matters: The kind of time experienced by an AI matters for how it can scale sustainably.

Personhood matters: The attributes of an AI that enable humans and AIs to relate to each other as persons (I-you), rather than things (I-it), are necessary elements to being able to construct coherent scalably composable agents at all.


The first three principles require that AI computation involve real atoms, live in real time, and deal with the second law of thermodynamics

The fourth heterodoxy turns personhood …into a load-bearing architectural element in getting to scaled AI via muddler agents. You cannot have scaled AI without agency, and you cannot have a scalable sort of agency without personhood.

As we go up the scale of biological complexity, we get much more programmable and flexible forms of communication and coordination. … we can start to distinguish individuals by their stable “personalities” (informationally, the identifiable signature of personhood). We go from army ants marching in death spirals to murmurations of starlings to formations of geese to wolf packs maneuvering tactically in pincer movements… to humans whose most sophisticated coordination patterns are so complex merely deciphering them stresses our intelligence to the limit.

Biology doesn’t scale to larger animals by making very large unicellular creatures. Instead it shifts to a multi-cellular strategy. Then it goes further: from simple reproduction of “mass produced” cells to specialized cells forming differentiated structures (tissues) via ontogeny (and later, in some mammals, through neoteny). Agents that scale well have to be complex and variegated agents internally, to achieve highly expressive and varied behaviors externally. But they must also present simplified facades — personas — to each other to enable the scaling and coordination.

Setting aside questions of philosophy (identity, consciousness),  personhood is a scaling strategy. Personhood is the behavioral equivalent of a cell. “Persons” are stable behavioral units that can compose in “multicellular” ways because they communicate differently than simpler agents with weak or non-existent personal boundaries, and low-agency organisms like plants and insects.

When we form and perform “personas,” we offer a harder interface around our squishy interior psyches that composes well with the interfaces of other persons for scaling purposes. A personhood performance is something like a composability API [application programming interface] for intelligence scaling.
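
Rao's "composability API" line maps loosely onto a familiar software pattern: an interior that is free to be squishy and inconsistent, hidden behind a small, stable public surface that other agents can compose against. A minimal illustrative sketch (the class and method names are mine, not anything from Rao):

    class MuddlerAgent:
        """Squishy interior, hard facade: internal state can churn freely, but
        other agents only ever interact with the small 'persona' surface below."""

        def __init__(self):
            self._moods = []          # private and revisable - the squishy interior
            self._commitments = {}

        # --- the persona: a deliberately small, stable surface for composition ---
        def propose(self, task: str) -> str:
            """Commit to an externally legible intention."""
            self._commitments[task] = "accepted"
            return f"will attempt: {task}"

        def report(self, task: str) -> str:
            """Answer in a fixed vocabulary other agents can rely on."""
            return self._commitments.get(task, "unknown")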

Beyond Training Determinism

…Right now AIs experience most of their “time” during training, and then effectively enter a kind of stasis. …They require versioned “updates” to get caught up again…GPT4 can’t simply grow or evolve its way to GPT5 by living life and learning from it. It needs to go through the human-assisted birth/death (or regeneration, perhaps) singularity of a whole new training effort. And it’s not obvious how to automate this bottleneck in either a Darwinian or Lamarckian way.

…For all their power, modern AIs are still not able to live in real time and keep up with reality without human assistance outside of extremely controlled and stable environments…As far as temporality is concerned, we are in a “training determinism” regime that is very un-agentic and corresponds to genetic determinism in biology. What makes agents agents is that they live in real time, in a feedback loop with external reality unfolding at its actual pace of evolution.

Muddling Through vs. Godding Through

Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely. “Complex” here means the kinds of things humans typically do in larger groups, like designing and implementing complex governance policies or undertaking complex engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding. This often coincides with the threshold where reality gets too big to hold in one human head.

The root method attempts to fight limitations with brute, monolithic force. It aims to absorb all the relevant information regarding the circumstances a priori (analogous to training determinism), and discover the globally optimal solution through “rational” and “comprehensive” thinking. If the branch method is “muddling through,” we might say that the root, or rational-comprehensive approach, is an attempt to “god through.”…Lindblom’s thesis is basically that muddling through eats godding through for lunch.

To put it much more bluntly: Godding through doesn’t work at all beyond small scales and it’s not because the brains are too small. Reasoning backwards from complex goals in the context of an existing complex system evolving in real time doesn’t work. You have to discover forwards (not reason forwards) by muddling.

..in thinking about humans, it is obvious that Lindblom was right…Even where godding through apparently prevails through brute force up to some scale, the costs are very high, and often those who pay the costs don’t survive to complain…Fear of Big Blundering Gods is the essential worry of traditional AI safety theology, but as I’ve been arguing since 2012 (see Hacking the Non-Disposable Planet), this is not an issue because these BBGs will collapse under their own weight long before they get big enough for such collapses to be exceptionally, existentially dangerous.

This worry is similar to the worry that a 2,500 foot brick-and-mortar building might collapse and kill everybody in the city…It’s not a problem because you can’t build a brick-and-mortar building to that height. You need reinforced concrete. And that gets you into entirely different sorts of safety concerns.
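
In software terms, Lindblom's "root" method is up-front global optimization over complete plans, while the "branch" method is successive limited comparison - start somewhere and keep whichever small local change looks better right now. Here is a toy sketch of the contrast (purely illustrative; it shows the two patterns, not a proof that one beats the other):

    import itertools
    import random

    def god_through(options, evaluate):
        """Root method: enumerate every complete plan and pick the 'optimum'.
        Blows up combinatorially and assumes evaluate() is fully knowable up front."""
        return max(itertools.product(*options), key=evaluate)

    def muddle_through(options, evaluate, steps=1000):
        """Branch method: successive limited comparisons among nearby alternatives."""
        plan = [random.choice(choices) for choices in options]
        for _ in range(steps):
            i = random.randrange(len(plan))
            candidate = list(plan)
            candidate[i] = random.choice(options[i])
            if evaluate(candidate) >= evaluate(plan):   # keep whatever looks better now
                plan = candidate
        return plan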

Protocols for Massed Muddling

How do you go from individual agents (AI or human) muddling through to masses of them muddling through together? What are the protocols of massed muddling? These are also the protocols of AI scaling towards MMIs (Massed Muddler Intelligences)

When you put a lot of them together using a mix of hard coordination protocols (including virtual-economic ones) and softer cultural protocols, you get a massed muddler intelligence, or MMI. Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.

I suspect at the scales we are talking about, we will have something that looks more like a market economy than like the internal command-economy structure of the human body. Both feature a lot of hierarchical structure and differentiation, but the former is much less planned, and more a result of emergent patterns of agglomeration around environmental circumstances (think how the large metros that anchor the global economy form around the natural geography of the planet, rather than how major organ systems of the human body are put together).

While I suspect MMIs will partly emerge via choreographed ontogenic roadmaps from a clump of “stem cells” (is that perhaps what LxMs [large X models] are??), the way market economies emerge from nationalist industrial policies, overall the emergent intelligences will be masses of muddling rather than coherent artificial leviathans. Scaling “plans” will help launch, but not determine the nature of MMIs or their internal operating protocols at scale. Just like tax breaks and tariffs might help launch a market economy but not determine the sophistication of the economy that emerges or the transactional patterns that coordinate it. This also answers the regulation question: Regulating modern AI MMIs will look like economic regulation, not technology regulation.

How the agentic nature of the individual muddler agent building block is preserved and protected is the critical piece of the puzzle, just as individual economic rights (such as property rights, contracting regimes) are the critical piece in the design of “free” markets.

Muddling produces a shell of behavioral uncertainty around what a muddler agent will do, and how it will react to new information, that creates an outward pressure on the compressive forces created by the dense aggregation required for scaling. This is something like the electron degeneracy pressure that resists the collapse of stars under their own gravity. Or how the individualist streak in even the most dedicated communist human resists the collapse of even the most powerful cults into pure hive minds. Or how exit/voice dynamics resist the compression forces of unaccountable organizational management.

…the fundamental intentional tendency of individual agents, on which all other tendencies, autonomous or not, socially influencable or not, rest…[is]  body envelope integrity.

…This is a familiar concern for biological organisms. Defending against your body being violently penetrated is probably the foundation of our entire personality. It’s the foundation of our personal safety priorities — don’t get stabbed, shot, bitten, clawed or raped. All politics and economics is an extension of envelope integrity preservation instincts. For example, strictures against theft (especially identity theft) are about protecting the body envelope integrity of your economic body. Habeas corpus is the bedrock of modern political systems for a reason. Your physical body is your political body…if you don’t have body envelope integrity you have nothing.

This is easiest to appreciate in one very visceral and vivid form of MMIs: distributed robot systems. Robots, like biological organisms, have an actual physical body envelope (though unlike biological organisms they can have high-bandwidth near-field telepathy). They must preserve the integrity of that envelope as a first order of business … But robot MMIs are not the only possible form factor. We can think of purely software agents that live in an AI datacenter, and maintain boundaries and personhood envelopes that are primarily informational rather than physical. The same fundamental drive applies. The integrity of the (virtual) body envelope is the first concern.

This is why embodiment is an axiomatic concern. The nature of the integrity problem depends on the nature of the embodiment. A robot can run away from danger. A software muddler agent in a shared memory space within a large datacenter must rely on memory protection, encryption, and other non-spatial affordances of computing environments.

Personhood is the emergent result of successfully solving the body-envelope-integrity problem over time, allowing an agent to present a coherent and hard mask model to other agents even in unpredictable environments. This is not about putting a smiley-faced RLHF [Reinforcement Learning from Human Feedback] mask on a shoggoth interior to superficially “align” it. This is about offering a predictable API for other agents to reliably interface with, so scaled structures in time and social space don’t collapse.  [They have] hardness - the property or quality that allows agents with soft and squishy interiors to offer hard and unyielding interfaces to other agents, allowing for coordination at scale.

…We can go back to the analogy to reinforced concrete. MMIs are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and cement in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.

MMIs will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.

And then we can have a whole new culture war about whether that’s a good thing.

Monday, February 05, 2024

Functional human brain tissue produced by layering different neuronal types with 3D bioprinting

A very important advance by Su-Chun Zhang and collaborators at the University of Wisconsin that moves studies of nerve cells connecting in nutrient dishes from two to three dimensions:  

Highlights

  • Functional human neural tissues assembled by 3D bioprinting
  • Neural circuits formed between defined neural subtypes
  • Functional connections established between cortical-striatal tissues
  • Printed tissues for modeling neural network impairment

Summary

Probing how human neural networks operate is hindered by the lack of reliable human neural tissues amenable to the dynamic functional assessment of neural circuits. We developed a 3D bioprinting platform to assemble tissues with defined human neural cell types in a desired dimension using a commercial bioprinter. The printed neuronal progenitors differentiate into neurons and form functional neural circuits within and between tissue layers with specificity within weeks, evidenced by the cortical-to-striatal projection, spontaneous synaptic currents, and synaptic response to neuronal excitation. Printed astrocyte progenitors develop into mature astrocytes with elaborated processes and form functional neuron-astrocyte networks, indicated by calcium flux and glutamate uptake in response to neuronal excitation under physiological and pathological conditions. These designed human neural tissues will likely be useful for understanding the wiring of human neural networks, modeling pathological processes, and serving as platforms for drug testing.


Friday, February 02, 2024

Towards a Metaphysics of Worlds

I have a splitting headache from having just watched a 27-minute-long, rapid-fire YouTube lecture by Venkatesh Rao, given last November at the Autonomous Worlds Assembly in Istanbul (part of DevConnect, a major Ethereum ecosystem event).  His latest newsletter, “Towards a Metaphysics of Worlds,” adds some notes and context and gives a link to its slides. As Rao notes:

“This may seem like a glimpse into a very obscure and nerdy subculture for many (most?) of you, but I think something very important and interesting is brewing in this scene and more people should know about it.”

I would suggest that you skip the YouTube lecture and cherry-pick your way through his slides.  Some are very simple and quite striking, clearly presenting interesting ideas about the epistemology, ontology, and definitions of worlds.  Here is Slide 11, where what Rao means by "Worlds" is made clearer:

Thursday, December 28, 2023

Origins of our current crises in the 1990s, the great malformation, and the illusion of race.

I'm passing on three clips I found most striking from David Brooks' recent NYTimes Sidney Awards column:

I generally don’t agree with the arguments of those on the populist right, but I have to admit there’s a lot of intellectual energy there these days. (The Sidneys go to essays that challenge readers, as well as to those that affirm.) With that, the first Sidney goes to Christopher Caldwell for his essay “The Fateful Nineties” in First Things. Most people see the 1990s as a golden moment for America — we’d won the Cold War, we enjoyed solid economic growth, the federal government sometimes ran surpluses, crime rates fell, tech took off.

Caldwell, on the other hand, describes the decade as one in which sensible people fell for a series of self-destructive illusions: Globalization means nation-states don’t matter. Cyberspace means the material world is less important. Capitalism can run on its own without a countervailing system of moral values. Elite technocrats can manage the world better than regular people. The world will be a better place if we cancel people for their linguistic infractions.

As Caldwell sums it up: “America’s discovery of world dominance might turn out in the 21st century to be what Spain’s discovery of gold had been in the 16th — a source of destabilization and decline disguised as a windfall.”

***************** 

In “The Great Malformation,” Talbot Brewer observes that parenthood comes with “an ironclad obligation to raise one’s children as best one can.” But these days parents have surrendered child rearing to the corporations that dominate the attention industry, TikTok, Facebook, Instagram and so on: “The work of cultural transmission is increasingly being conducted in such a way as to maximize the earnings of those who oversee it.”

He continues: “We would be astonished to discover a human community that did not attempt to pass along to its children a form of life that had won the affirmation of its elders. We would be utterly flabbergasted to discover a community that went to great lengths to pass along a form of life that its elders regarded as seriously deficient or mistaken. Yet we have slipped unawares into precisely this bizarre arrangement.” In most societies, the economy takes place in a historically rooted cultural setting. But in our world, he argues, the corporations own and determine the culture, shaping our preferences and forming, or not forming, our conception of the good.

*****************

It’s rare that an essay jolts my convictions on some major topic. But that happened with one by Subrena E. Smith and David Livingstone Smith, called “The Trouble With Race and Its Many Shades of Deceit,” in New Lines Magazine. The Smiths are, as they put it, a so-called mixed-race couple — she has brown skin, his is beige. They support the aims of diversity, equity and inclusion programs but argue that there is a fatal contradiction in many antiracism programs: “Although the purpose of anti-racist training is to vanquish racism, most of these initiatives are simultaneously committed to upholding and celebrating race.” They continue: “In the real world, can we have race without racism coming along for the ride? Trying to extinguish racism while shoring up race is like trying to put out a fire by pouring gasoline on it.”

I’ve heard this argument — that we should seek to get rid of the whole concept of race — before and dismissed it. I did so because too many people I know have formed their identity around racial solidarity — it’s a source of meaning and strength in their lives. The Smiths argue that this is a mistake because race is a myth: “The scientific study of human variation shows that race is not meaningfully understood as a biological grouping, and there are no such things as racial essences. There is now near consensus among scholars that race is an ideological construction rather than a biological fact. Race was fashioned for nothing that was good. History has shown us how groups of people ‘racialize’ other groups of people to justify their exploitation, oppression and annihilation.”

Monday, December 25, 2023

Large Language Models are not yet providing theories of human language.

 From Dentella et al. (open source):

Significance
The synthetic language generated by recent Large Language Models (LMs) strongly resembles the natural languages of humans. This resemblance has given rise to claims that LMs can serve as the basis of a theory of human language. Given the absence of transparency as to what drives the performance of LMs, the characteristics of their language competence remain vague. Through systematic testing, we demonstrate that LMs perform nearly at chance in some language judgment tasks, while revealing a stark absence of response stability and a bias toward yes-responses. Our results raise the question of how knowledge of language in LMs is engineered to have specific characteristics that are absent from human performance.
Abstract
Humans are universally good in providing stable and accurate judgments about what forms part of their language and what not. Large Language Models (LMs) are claimed to possess human-like language abilities; hence, they are expected to emulate this behavior by providing both stable and accurate answers, when asked whether a string of words complies with or deviates from their next-word predictions. This work tests whether stability and accuracy are showcased by GPT-3/text-davinci-002, GPT-3/text-davinci-003, and ChatGPT, using a series of judgment tasks that tap on 8 linguistic phenomena: plural attraction, anaphora, center embedding, comparatives, intrusive resumption, negative polarity items, order of adjectives, and order of adverbs. For every phenomenon, 10 sentences (5 grammatical and 5 ungrammatical) are tested, each randomly repeated 10 times, totaling 800 elicited judgments per LM (total n = 2,400). Our results reveal variable above-chance accuracy in the grammatical condition, below-chance accuracy in the ungrammatical condition, a significant instability of answers across phenomena, and a yes-response bias for all the tested LMs. Furthermore, we found no evidence that repetition aids the Models to converge on a processing strategy that culminates in stable answers, either accurate or inaccurate. We demonstrate that the LMs’ performance in identifying (un)grammatical word patterns is in stark contrast to what is observed in humans (n = 80, tested on the same tasks) and argue that adopting LMs as theories of human language is not motivated at their current stage of development.
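
For readers who want a feel for the experimental design, here is a minimal sketch of the elicitation loop the abstract describes. It is not the authors' code; ask_model is a hypothetical stand-in for whichever model API is being tested:

    PHENOMENA = [
        "plural attraction", "anaphora", "center embedding", "comparatives",
        "intrusive resumption", "negative polarity items",
        "order of adjectives", "order of adverbs",
    ]

    def run_judgment_study(sentences, ask_model, repetitions=10):
        """sentences maps each phenomenon to 10 (sentence, is_grammatical) pairs
        (5 grammatical, 5 ungrammatical); 8 x 10 x 10 = 800 judgments per model."""
        records = []
        for phenomenon in PHENOMENA:
            for sentence, is_grammatical in sentences[phenomenon]:
                for rep in range(repetitions):
                    reply = ask_model(   # hypothetical call to the LM under test
                        "Is the following sentence grammatical? Answer yes or no.\n"
                        + sentence)
                    records.append({
                        "phenomenon": phenomenon,
                        "grammatical": is_grammatical,
                        "repetition": rep,
                        "said_yes": reply.strip().lower().startswith("yes"),
                    })
        return records

    def accuracy(records, grammatical=True):
        """Share of correct answers in the grammatical or ungrammatical condition."""
        subset = [r for r in records if r["grammatical"] == grammatical]
        hits = sum(r["said_yes"] == grammatical for r in subset)
        return hits / len(subset) if subset else float("nan")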

Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this posting to save for myself some clips of text from Venkatesh Rao’s most recent piece, to continue mulling over where I place it on the trivial versus sublime spectrum (some of his jargon you will only understand if you have followed the previous installments on Rao I've put in MindBlog...note the link at the end to The Summer of Protocols)… Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.
But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.
So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:
Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.
In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.
I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.
Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.
Call this property hardness.
Human civilization depends in part on our ability to make the future more certain in specific ways.
Fixed, hard points across time that let us make the world more predictable.
We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.
The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.
It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.
In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.
The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.
The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.
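
Rao's ATM example above is about as small as a protocol gets: the hardness is nothing more than the order of two steps. A toy sketch of the re-ordering (the class and method names are made up, purely to show the idea):

    class ATM:
        """Minimal stand-in for the machine."""
        def return_card(self):
            print("card ejected - customer must take it before anything else happens")
        def dispense_cash(self, amount):
            print(f"${amount} dispensed")

    def withdraw_soft(atm, amount):
        # Soft ordering: relies on the customer remembering to grab the card.
        atm.dispense_cash(amount)
        atm.return_card()

    def withdraw_hard(atm, amount):
        # Hard ordering: the protocol itself guarantees the card comes back first.
        atm.return_card()
        atm.dispense_cash(amount)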

Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao, on how modern AI puts us into an age of exploration of computational reality. I got halfway through my usual gig of extracting clips from the article to archive the main points in a post for later lookup by myself or others. Halfway through this chore, I realized the material was so deep and dense that my clips contained more than half of the original text. So I have halted the editing down and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which is a vastly better job than I could have done without spending many more hours on the material: 

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.


Thursday, December 07, 2023

Google's Bard and ChatGPT-4 on using AI to explore beyond language and words...

I show first the prompt I addressed yesterday afternoon to both Bard and ChatGPT-4.  The more nuanced response I received from Bard makes me wonder if Google has already started to use the new Gemini AI algorithms  announced yesterday to power Bard.  Here is the prompt, followed by the Bard and ChatGPT-4 responses:

Exploring beyond language and words - starting a conversation: Let us suppose that increasingly sophisticated LLMs, generative pretrained transformers, will become superior to humans in performing any linguistic chores requiring use and knowledge of all the world's written literature that was used as the training opus. This might free humans to pay more attention to understanding non-linguistic aspects of their experiences (such as non verbal music, art, prosody, body language, social mirroring, etc). Could you suggest ways in which AI could assist in defining and elucidating the non-linguistic aspects of our lives so that we might devote more of our attention to them rather than to linguistic chores ?

Google's Bard's response:
 
How AI could help us explore beyond language and words:

Your suggestion about increasingly sophisticated LLMs taking over linguistic tasks opens up a fascinating possibility: freeing humans to focus on non-linguistic aspects of experience. Here are some ways AI could assist us in this exploration:

1. Data analysis and pattern recognition:

  • Large-scale analysis of nonverbal communication: AI could analyze vast datasets of audio, video, and sensor data to identify patterns and relationships between nonverbal cues like facial expressions, gestures, posture, and vocal intonation. This could deepen our understanding of non-verbal communication in various contexts, from interpersonal communication to artistic expression.
  • Mapping emotions and mental states: AI could analyze physiological data like heart rate, skin conductance, and brain activity to identify correlations with emotions and mental states. This could lead to the development of tools that help us better understand our own and others' emotions, fostering empathy and improving communication.
  • Identifying patterns in non-verbal art forms: AI could analyze large collections of music, paintings, sculptures, and other non-verbal art forms to identify recurring patterns, styles, and themes. This could help us discover new insights into human creativity and unlock hidden meanings in art.

2. AI as a creative collaborator:

  • Generating novel forms of non-verbal communication: AI could be used to generate new forms of non-verbal communication, such as synthesized musical pieces, abstract visual art, or even gestures and facial expressions. This could lead to new artistic expressions and enhance communication beyond words.
  • Personalized non-verbal experiences: AI could personalize non-verbal experiences based on individual preferences and emotional states. For example, AI could curate music playlists or generate visual art that resonates with a user's mood. This could deepen our emotional engagement with non-verbal art forms and enhance our well-being.
  • Interactive storytelling and learning: AI could create interactive non-verbal narratives that adapt to user choices and emotions. This could provide engaging and personalized learning experiences in various fields, from language learning to cultural immersion.

3. AI as a facilitator for human interaction:

  • Real-time translation and interpretation of non-verbal cues: AI could be used to translate nonverbal cues in real-time, facilitating communication across cultures and bridging the gap between people with different communication styles.
  • Augmenting sensory experiences: AI could be used to create immersive virtual environments that stimulate various senses, allowing us to explore and interact with the world in new ways. This could broaden our understanding of non-verbal communication and deepen our appreciation for sensory experiences.
  • Promoting mindfulness and self-awareness: AI could be used to develop tools that help us become more aware of our own non-verbal communication and emotions. This could enhance our self-awareness and improve our ability to manage our emotions and relationships.

4. Ethical considerations:

While AI holds immense potential for assisting us in exploring the non-linguistic aspects of our lives, ethical considerations must be addressed. These include:

  • Bias and fairness: AI models are susceptible to biases present in the data they are trained on. It is crucial to ensure that AI tools used to analyze and interpret non-verbal communication are unbiased and fair.
  • Data privacy and security: The use of AI necessitates the collection and analysis of personal data. It is essential to ensure data privacy and security, safeguarding individuals' information from unauthorized access or misuse.
  • Human autonomy and control: AI should be used to enhance human understanding and experience, not replace it. We must ensure that AI remains a tool that empowers humans to explore and appreciate the non-linguistic aspects of life, not a force that dictates our thoughts and emotions.

By leveraging the power of AI responsibly and ethically, we can embark on a journey of deeper understanding and engagement with the non-linguistic aspects of our existence. This journey holds the potential to enrich our lives, strengthen our relationships, and unlock new forms of creativity and expression.

*********

ChatGPT-4's response:

The idea of AI assisting in understanding and elucidating non-linguistic aspects of human experience is intriguing and holds much potential. Here are several ways in which AI could be instrumental in this area:

  1. Enhanced Emotional Recognition and Analysis: AI can be trained to recognize and interpret human emotions through facial expressions, voice modulation, and body language. This would be particularly useful in enhancing interpersonal communication and understanding, especially in situations where verbal communication is limited or not possible.

  2. Music and Art Analysis: AI can analyze patterns, styles, and historical influences in music and art. By doing so, it can provide insights into the emotional and cultural contexts of various artworks or musical pieces, deepening our appreciation and understanding of these forms.

  3. Virtual Reality and Augmented Reality Experiences: AI can be used to create immersive VR and AR experiences that enable individuals to explore non-linguistic aspects of human culture and history, such as virtual tours of art galleries, historical sites, or even recreations of historical events.

  4. Non-Verbal Communication Training: AI can assist in teaching and improving non-verbal communication skills, such as understanding body language, gestures, and facial expressions, which are crucial for effective interpersonal interactions.

  5. Social Mirroring and Behavioral Analysis: AI can analyze social interactions and provide feedback or insights into social dynamics. This could be useful in understanding group behaviors, cultural norms, and even in improving social skills.

  6. Sensory Enhancement Technologies: AI-driven technologies could enhance or augment human senses, allowing for a deeper engagement with the non-linguistic elements of the world, such as experiencing a wider range of sounds or visual details.

  7. Emotional and Psychological Support: AI can offer personalized support and interventions based on non-linguistic cues, aiding in mental health and emotional well-being.

  8. Enhancing Creative Processes: AI can assist artists, musicians, and creators by suggesting non-linguistic elements that might enhance their work, based on analysis of vast amounts of artistic and musical data.

  9. Personalized Learning and Development: AI can tailor learning experiences to individual preferences and learning styles, focusing on non-linguistic methods like interactive simulations, visual aids, and hands-on activities.

  10. Cross-Cultural Understanding and Communication: AI can help bridge cultural gaps by providing insights into non-verbal cues and norms that are specific to different cultures, aiding in cross-cultural understanding and communication.

In all these ways, AI has the potential to significantly enrich human understanding and appreciation of the non-linguistic aspects of our lives, allowing us to engage more deeply with the world beyond words.

 



Sunday, December 03, 2023

A flash of clarity on what current LLMs can and cannot do. An AI apocalypse does not appear to be imminent...

In his most recent newsletter, Venkatesh Rao pulls up a Twitter thread he wrote in 2017 making what he calls an ontological distinction between boundary intelligence and interior intelligence. This was before transformers like GPT-1 began to attract much attention. The distinction Rao makes is central to understanding what current large language models (LLMs) can and cannot do. Here is his unedited text from 2017:
 
1. I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence

2. Boundary intelligence is how you deal with information flows across the boundary of your processing abilities 

3. Interior intelligence is how you process information. Includes logic, emotional self-regulation, etc.

4. A thesis I've been converging on is that boundary intelligence is VASTLY more consequential once interior intelligence exceeds a minimum

5. Boundary intelligence is by definition meta, since you're tuning your filters and making choices about what to even let hit your attention

6. I think it is highly consequential because almost all risk management happens via boundary intelligence (blindspots, black swans etc)

7. Interior intelligence is your poker skill and strategy. Boundary intelligence is picking which table to sit down at

8. Interior intelligence is reading a book competently, extracting insights and arguments. Boundary intelligence is picking books to read. 

9. Interior intelligence is being a good listener. Boundary intelligence is deciding whom to listen to. 

10. Basically, better input plus mediocre IQ beats bad input and genius IQ every time, so boundary intelligence is leverage

11. And obviously, boundary intelligence is more sensitive to context. The noisier and angrier info streams get, the more BI beats II

12. Most of boundary intelligence has to do with input intelligence, but output intelligence becomes more important with higher agency 

13. Output intelligence is basically the metacognition around when/where/how/to-whom/why to say or do things you are capable of saying/doing

14. We think a lot about external factors in decisions, but output intelligence is about freedom left after you've dealt with external part

Next, Rao extracts the following from the abstract of a forthcoming paper by Yadlowsky et al.:

…when presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks. Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities.

He then continues his text, in the following selected clips:

Translated into the idiom from the fourteen points above, this translates into “It’s all interior intelligence, just within a very large boundary.” There is no boundary intelligence in current machine learning paradigms. There isn’t even an awareness of boundaries; just the ability to spout statements about doubt, unknowns, and boundaries of knowability; a bit like a blind person discussing color in the abstract.

This is not to say AI cannot acquire BI. In fact, it can do so in a very trivial way, through embodiment. Just add robots around current AIs and let them loose in real environments.

The reason people resist this conclusion is an irrational attachment to interior intelligence as a sacred cow (and among computer science supremacists, a reluctance to acknowledge the relevance and power of embodiment and situatedness in understandings of intelligence). If much of the effectual power of intelligence is attributable to boundary intelligence, there is much less room for sexy theories of interior intelligence. Your (cherished or feared) god-like AI is reduced to learning through FAFO (Fuck around and find out) feedback relationships with the rest of the universe, across its boundary, same as us sadsack meatbag intelligences with our paltry 4-GPU-grade interior intelligence.

In their current (undoubtedly very impressive) incarnation, what we have with AI is 100% II, 0% BI. Human and animal intelligences (and I suspect even plant intelligences, and definitely evolutionary process intelligence) are somewhere between 51-49 to 99.9-0.1% BI. They are dominated to varying degrees by boundary intelligence. Evolutionary processes are 100% BI, 0% II.

Monday, November 27, 2023

The feasibility of artificial consciousness through the lens of neuroscience

Some interesting perspectives from Aru, Larkum, and Shine in Trends in Neurosciences. Motivated readers can obtain a copy of the article's text from me.  

Highlights

  • Large language models (LLMs) can produce text that leaves the impression that one may be interacting with a conscious agent.
  • Present-day LLMs are text-centric, whereas the phenomenological umwelt of living organisms is multifaceted and integrated.
  • Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing. Currently, such processes are not implemented in LLMs.
  • The organizational complexity of living systems has no parallel in present-day AI tools. Possibly, AI systems would have to capture this biological complexity to be considered conscious.
  • LLMs and the current debates on conscious machines provide an opportunity to re-examine some core ideas of the science of consciousness.
Abstract
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Sunday, November 26, 2023

Religious wars in the tech industry.

In this post I continue to use MindBlog as my private reference library to put down essential ideas I want to hang onto by extracting relevant clips, in this case from the most recent essay by Venkatesh Rao. Here are the clips:

Unless you’ve been hiding under a rock, you’ve probably heard something about the short but dramatic saga that unfolded at OpenAI over the last week…The Open AI saga doesn’t yet have a name, but I am calling it EAgate, after Effective Altruism or EA, one of the main religions involved in what was essentially an early skirmish in a brewing six-way religious war that looks set to last at least a decade…Not just for the AI sector, but for all of tech…We are not just unwilling to talk to perceived ideological adversaries, we are unable to do so; their terms of reference for talking about things feel so not-even-wrong, we are reduced to incredulous stares.

Incredulous stares are an inarticulate prelude to more consequential hostilities. Instead of civil or uncivil debate, or even talking past each other, we are reduced to demanding that others acquire literacy in our own religious discourses and notions of sacredness before even verbal hostilities can commence…actual engagement across mutually incompatible religious mental models has become impossible.

Want to criticize EA in terms that can even get through to them? You’d better learn to talk in terms of “alignment,” “orthogonality thesis,” “instrumental convergence,” and “coherent extrapolated volition” before they’ll even understand what you’re saying, let alone realize you’re making fun of them, or bother to engage in ritual hostilities with you.

Want to talk to the accelerationists? Be prepared to first shudder in theatrical awe at literal aliens and new life taking birth before us. You’re not capable of such allegorically overwrought awe? Trot out the incredulous stare.

Want to talk to the woke crowd? Be prepared to ignore everything actually interesting about the technology and talk in pious sermons about decolonization and bias in AI models. You’re not? Well, trot out the incredulous stare.

Want to talk to me? You’d better get up to speed on oozification, artificial time, mediocre computing, Labatutian-Lovecraftian-Ballardian cycles, and AI-crypto convergence. My little artisan religion is not among the big and popular ones precipitating boardroom struggles, but it’s in the fray here, and will of course prove to be the One True Faith. You’re not willing to dive into my profound writings on my extended universe of made-up concepts? Feel free to direct an incredulous stare at me and move on.

It’s not that there’s no common ground. Everyone agrees GPUs are important, Nvidia’s CUDA (Compute Unified Device Architecture) is evil, and that there are matrix multiplications going on somewhere. The problem is the part that is common ground is largely disconnected from the contentious bits.

In such a situation, we typically dispense with debates, hostile or otherwise, and skip right to active warfare. Religious warfare is perhaps continuation of incredulous staring by other means. Such as boardroom warfare where the idea of destroying the org is a valid option on the table, bombing datacenters suspected of harboring Unaligned GPUs (which some religious extremists have suggested doing), and in the future, perhaps actual hot wars.

Why do I think we are entering a religious era? It’s a confluence of many factors, but the three primary ones, in my opinion, are: a) the vacuum of meaning created by the unraveling of the political landscape, b) the grand spectacle of a dozen aging tech billionaires performing their philosopher-king midlife crises in public, and c) finally, the emergence of genuinely startling new technologies that nobody has yet successfully managed to wrap their minds around, not even the Charismatic Great Men from whom we have become accustomed to taking our cues.

The Six Religions

Here’s my list of primary religions, along with the specific manifestations in the events of EAgate… there are significant overlaps and loose alliances that can be mistaken for primary religions …as well as a long tail of more esoteric beliefs in the mix that aren’t really consequential yet.

The religion of Great Man Adoration (GMA): Represented in EAgate by the cult of personality that was revealed to exist, attached to Sam Altman.

The religion of Platform Paternalism (PP): Represented in EAgate by Microsoft and in particular the speak-softly-and-carry-a-big-stick leadership style of Satya Nadella.

The religion of Rationalism: Represented by the Effective Altruism (EA) movement. EA represented (and continues to represent) a particular millenarian notion of “AI safety” focused on the “X-risk” of runaway God-like AIs.
 
The religion of Accelerationism: Often referred to as e/acc (for Effective Accelerationism), initially an ironic/satirical response to EA that first emerged as a genre of memes a few years ago.
 
The religion of wokeness: Mostly on the sidelines for EAgate, it did appear briefly in a post-credits scene, as competing priesthoods briefly circled the question of the future of OpenAI’s new and too-small board.

The religion of neopaganism: Built around a “small gods” polytheistic vision of the future of AI, fueled by open-source models and cheap, commodity hardware once we’re past the current Nvidia-controlled GPU near-monopoly, this religion … is clearly helping shape the multi-faceted moral panic that is EA.

Why do I call these currents of thought religions, rather than merely contending political ideologies, such as those that featured in the culture wars of the last decade?

The reason is that all are shaped by their unique responses to fundamentally new phenomena being injected into the world by technology. These responses are about technology qua technology. … Ordinary political interests, while present, are secondary.

The simmering religious wars of today are about the nature and meaning of emerging technologies themselves. And not just technologies with a retail presence like AI, crypto, and climate tech. It is no accident that geopolitics today is warily circling the TSMC fabs in Taiwan. Sub-3nm semiconductor manufacturing is yet another mysterious technological regime…

The technological revolutions are real even if the first responses lack the poetry and philosophical sophistication we have come to expect.

What comes next? As we get tired of holding each other in incredulous gazes, most of us will return to our chosen native religions to make sense of the unfolding reality.

Sunday, November 19, 2023

AI is infusing our daily lives. Is it an existential threat?

As AI is being subtly melded into our daily lives by the Google, Apple, and Microsoft clouds, there is continuing debate over whether it is or isn’t an existential threat. This debate has been brought to a head by the sudden and unexpected firing of OpenAI’s chief executive Sam Altman, which starkly exposed the philosophical rift in the AI community between those who are more and those who are less concerned about the potential risks of its untrammeled development. How many guardrails, if any, should be in place? I have been on the techno-optimists’ side of this debate and passed on Marc Andreessen’s rather extreme and overstated Substack post titled "Why AI Will Save the World" in a June 23 MindBlog post. I was then completely horrified by "Fighting," a subsequent piece by this ideologist of the Silicon Valley elite, which took a very positive view of Elon Musk challenging Mark Zuckerberg to a cage fight. I was going to write a piece on his dystopian future world, but two NYTimes pieces have done a better job of it than I could: Elizabeth Spiers’ "A Tech Overlord’s Horrifying, Silly Vision for Who Should Rule the World" and Ezra Klein’s "The Chief Ideologist of the Silicon Valley Elite Has Some Strange Ideas".

Friday, November 10, 2023

Personalized chatbots and wearable AI devices for melding AI with our everyday lives

Personalized GPTs are here, but apparently not yet at a level accessible to consumers like me who don't want to do coding. I've been looking into making my own personal chatbot whose training data would be 50 MB of my personal journal, about 20,000 pages. I would then like to query and chat with the resulting chatbot about events in my life since 1980. If anyone reading this has some advice, let me know!
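
For readers who do code, here is one plausible shape such a chatbot could take. Rather than fine-tuning a model on the journal, the common pattern is retrieval-augmented generation: split the journal into chunks, embed them, and at question time hand the most relevant chunks to a chat model along with the question. The sketch below is only illustrative; it assumes the OpenAI Python SDK (v1.x) with an API key set, and the file name, chunk sizes, and model names are placeholders rather than recommendations. A real 50 MB journal would need batched embedding and a cached or proper vector store.

```python
# Minimal retrieval-augmented chat sketch over a personal journal (illustrative only).
# Assumes: OpenAI Python SDK v1.x installed, OPENAI_API_KEY set, and the journal
# exported as a single plain-text file ("journal.txt" is a placeholder name).
import numpy as np
from openai import OpenAI

client = OpenAI()

def chunk_text(text, size=1500, overlap=200):
    """Split the journal into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(texts):
    """Embed a list of strings; the model name is an assumption."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

journal = open("journal.txt", encoding="utf-8").read()
chunks = chunk_text(journal)
chunk_vecs = embed(chunks)  # in practice: embed in batches and cache to disk

def ask(question, k=5):
    """Retrieve the k most relevant journal chunks and ask the chat model."""
    q_vec = embed([question])[0]
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    context = "\n---\n".join(chunks[i] for i in np.argsort(-sims)[:k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the journal excerpts provided."},
            {"role": "user",
             "content": f"Journal excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask("What was I doing in the summer of 1985?"))
```

For those who would rather not code at all, hosted tools such as LlamaIndex, LangChain, or a custom GPT with file uploads appear to wrap essentially the same retrieval pattern.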

Below are some URLs to my morning's reading:

These links are to articles from Kevin Roose, NYTimes tech columnist:  

Personalized A.I. Agents Are Here. Is the World Ready for Them?

Kevin Roose Hard Fork podcast (transcript available) on this and other AI topics 

And this article is on AI devices seeking to cut into smartphone dominance 

It points to two such devices: The Humane Ai Pin and The Rewind AI Pendant 

Monday, October 16, 2023

Using AI to find retinal biomarkers for patient sex that ophthalmologists can be trained to see.

Delavari et al. demonstrate the use of AI to find aspects of human retinal images that identify whether they come from male or female patients, a distinction that clinical ophthalmologists had not previously been able to make. Here is their abstract:
We present a structured approach to combine explainability of artificial intelligence (AI) with the scientific method for scientific discovery. We demonstrate the utility of this approach in a proof-of-concept study where we uncover biomarkers from a convolutional neural network (CNN) model trained to classify patient sex in retinal images. This is a trait that is not currently recognized by diagnosticians in retinal images, yet one successfully classified by CNNs. Our methodology consists of four phases: In Phase 1, CNN development, we train a visual geometry group (VGG) model to recognize patient sex in retinal images. In Phase 2, Inspiration, we review visualizations obtained from post hoc interpretability tools to make observations, and articulate exploratory hypotheses. Here, we listed 14 hypotheses about retinal sex differences. In Phase 3, Exploration, we test all exploratory hypotheses on an independent dataset. Out of 14 exploratory hypotheses, nine revealed significant differences. In Phase 4, Verification, we re-tested the nine flagged hypotheses on a new dataset. Five were verified, revealing (i) significantly greater length, (ii) more nodes, and (iii) more branches of retinal vasculature, (iv) greater retinal area covered by the vessels in the superior temporal quadrant, and (v) darker peripapillary region in male eyes. Finally, we trained a group of ophthalmologists (N=26) to recognize the novel retinal features for sex classification. While their pretraining performance was not different from chance level or the performance of a nonexpert group (N=31), after training, their performance increased significantly (p<0.001, d=2.63). These findings showcase the potential for retinal biomarker discovery through CNN applications, with the added utility of empowering medical practitioners with new diagnostic capabilities to enhance their clinical toolkit.
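
For readers curious what Phase 1 of such a pipeline might look like in practice, here is a minimal, illustrative PyTorch sketch of fine-tuning an ImageNet-pretrained VGG-16 for two-class (male/female) fundus-image classification. The directory layout, hyperparameters, and absence of a validation split are my own simplifications, not the authors' actual code, and it assumes torchvision 0.13 or later. Phase 2 would then apply post hoc interpretability tools (for example, saliency or Grad-CAM maps) to the trained model.

```python
# Minimal sketch of Phase 1: fine-tuning VGG-16 to classify patient sex from
# fundus images. Folder layout ("retina/train/<male|female>/...") and all
# hyperparameters are placeholders, not the published pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("retina/train", transform=tfm)  # two class folders
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)  # replace 1000-way head with male/female
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}  last-batch loss {loss.item():.3f}")
```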

Monday, October 02, 2023

Fluid Fogs and Fixed Flows

For the MindBlog readers who may have noted my "This is the New 'Real World'" post and followed me down the rabbit hole of Venkatesh Rao’s narrative worlds, I pass on the following abridgment of his 9/23/2023 installment, titled “Fluid Fogs and Fixed Flows,” which reduces its length by half. I have done this mainly for my own use, to facilitate my future recall of his ideas:

Worlds and Real World

To briefly recap last week’s essay, I’m using world and real world in the sense of last week’s essay: A world is a coherent, immersive, totalizing subjectivity you can inhabit, as a sort of cognitive indoors. The real world is the corresponding outdoors — the messy union of the dozen or so most consequential worlds in existence at any given time.

The process by which the real world emerges, as a negotiation among worlds, is one that makes it qualitatively different. In brief, regular worlds are finite games, while the real world is the infinite game.

Weirdness, Fog, and Unnarratability

The relationship between weirdness, brain fog, and unnarratability is something like the relationship between a crisis, the panic it induces, and the solvability of the crisis.

World-brain fog affects those in a given world. Real-world-brain fog affects everybody. For us individual sentient elements of these world-brains, this fog manifests as the spectacle of history becoming incoherent.

Fog vs. Flow

The opposite of brain fog is flow. When thoughts flow easily, clearly, and tastefully, from one idea to the right next idea...Where the one-step-at-a-time ethos I identified earlier in this series as the essence of never-ending stories is not just all you can live by, it’s all you need to live by.

To be clear, I’m not saying fog is bad and flow is good. That would be like saying clear weather is good and storms are bad. This is just a pair of opposed subjective cognitive conditions. Setting aside valuative judgments, my claim is that the real-world-brain is in a state of fog rather than flow right now.

To say that the real world is suffering from world-brain fog is to say that the infinite game is in narrative trouble, regardless of the prevailing calculus of winners and losers in the constituent finite games. The question of winners and losers is distinct from the question of whether the game can be continued. The real world being foggy means it is hard to see how to continue the game, whether you’re currently winning or losing any specific game.

Okay, so that’s the big thesis here: history feels far more unnarratable than it needs to, because the real world is suffering from world-brain fog. If we can get rid of this fog, we’ll still have to deal with the objective realities of the permaweird, but that might be an easier task.

Individual Fogs

To think through the idea of a foggy real-world brain, it’s useful to think about the more familiar individual-brain phenomenon.

I’ll use myself as an example to analyze these factors… Looking back 10 years at my 2013 archives, 2023’s output of words feels like a congealed sludge by comparison… The sludginess of 2023 seems to afflict all words being produced by everybody.

In the last couple of years, this god-like situation awareness of the broad currents of my own thought has become dissipated, fragmented, and yes, foggy. I often forget obvious connections I’ve made in the past, or worse, fail to make them in the first place. Sometimes I forget entire trails of thought I’ve gone down, over multiple essays. Sometimes I clumsily re-think thoughts I’ve previously thought in more elegant ways. There is no sense of a longer compounding journey unfolding over years and millions of words. Instead, there is a sense of a random walk comprising individual short outings of a few thousand words. When the fog is at its worst, the 2 million words seem like so much rubble.

So my individual brain fog in the sense of such missed connections and missed opportunities for emergence is bad for the kind of thinking and writing I do. The fog/flow pair is neutral, but for certain kinds of activity, such as thinking and writing in hypertext-augmented ways, fog is very bad. Just like literal fog is very bad for ships trying to navigate treacherous waters.

The largest fraction of the value of writing online in a densely internally linked way³ lies in the larger structures of thought that emerge. I’ve previously jokingly referred to my personal instance of this as the “extended ribbonfarm blogamatic universe,” but more seriously and generally, we might call these personal protocol narratives. It’s a particular way of being in the infinite game, one step at a time, that’s available to writers. Anyone who writes primarily online, with a lot of nonlinear internal hyperlinking, has a personal protocol narrative by default. Traditional writers usually don’t, unless they work extra hard for it⁴ (something I'm too lazy to do, which makes me think in a non-internet world, I wouldn’t be a writer).

This superpower is the reason people like me eschew traditional media and book-writing and largely stick to blogs, microblogs, and newsletters. Not only is the emergent landscape the bulk of the value, it is the main enabling factor in its own creation. I can write in certain ways because I have this evolving canvas doing most of the work. If this emergent landscape of thought starts to disappear, the whole thing falls apart.

And while hypertext is a powerful brain-augmentation technology, it can’t defend against all cognitive afflictions. In particular, brain fog is deadly. It weakens your ability to make new internal links, and as a result makes the connected landscape less connected, and therefore both less useful, and less usable. Brain fog drives a vicious cycle of degeneration towards a more primitive textuality. At some point, I might have no technical advantage at all over book-writing cavemen or even typewriter-wielding Neanderthals.

Entangled Fogs

Some technologies are simply foggier than others… email newsletter platforms are much foggier than blogs… blogs simply want to create rich internal linking… I use an order of magnitude fewer links in newsletters than in blog posts. I know this because I still retain stronger gestalt memories of my blog archives than my newsletter archives.

Biology and technology conspire to create brain fog in messy ways. When I got Covid a year ago, and experienced a few months of a more biological style brain fog, writing in my peculiar way felt insanely difficult, and what writing I was able to do was much more disconnected than my norm…much of brain fog can and should be attributed to factors in the environment. Just as your panic at being caught in a fire isn’t entirely in your head — there is actually a fire — brain fog isn’t all in your head: you’re in a foggy condition. You’re in an unnarratable world. The stories that you want to tell, and are used to telling, are suddenly less tellable.

This is where the entanglement with world-brain fog comes in.

Accounting for age, medium, and Covid-type effects, I think there remains a large unexplained factor in every case, though the fraction varies….I think there is something going on at the cultural, societal level, that makes it vastly harder to remember the gist of large bodies of information…But if I am right, unnarratability and world-brain fog should affect everybody, regardless of age and occupation, and I think I see signs everywhere that this is the case.

Fixed and Fluid Logics of Caring

Now we can ask the question. What does it mean for a world, specifically the real world to experience something analogous to what I just described at the individual level? What is world-brain fog? …And since there is nobody “there” to experience it, how does it manifest in the lives of us individuals who are like the neurons of the world brains we inhabit?

We’ve already seen one element of what it feels like. A sense that there’s more fogginess than you can attribute to your own circumstances…Here’s another: it’s hard to decide what to care about. Logics of caring are in fact essential in creating flow out of fog. The world is always complex. What you care about is what determines how simple it is for you. How you pick what to care about is your logic of caring.

…you might want a locus of care that is both stable, and world-like. This disposition is what I’m calling fixed logic of care…People with fixed logics of care love to talk about values, because that’s where the fixedness manifests explicitly.

…you might want a locus of care that follows the liveliest streams of events in the world. …You want to be where the action is, not where your values point. This disposition is a fluid logic of care.

Fixed/fluid is not the same as conservative/liberal, traditional/progressive, or winning/losing.

It might seem like I’ve set up an argument that admits no world-scale flow at all for either fixed or fluid logics of caring. This is incorrect. A few well-defined groups sneak through this sieve of constraints and appear to be experiencing world-scale flow. All of them operate by fixed logics of caring, but also have an additional factor in common: they rest atop what I call interposing intelligences.

Interposing Intelligences

The first well-defined group that seems to have retained a sense of world-scale flow is economists…anyone for whom the global economy is the primary sense-making lens on reality…it’s all just been a game of watching various numbers go up and down in a curiously ahistorical mirror world. In that mirror world, there has been no Great Weirding.

There’s a reason for this. The economy offers one of the few remaining world-scale fixed logics of caring. To care through that logic about anything in the world at all is to care about it in economic terms. There’s even a term for this operation of bringing a matter into the fixed logic of care: “pricing it in.” To the economist-mind, economics is the primary phenomenological ground of the world. Things don’t become real until they become economically real. Intentions don’t become real until they become revealed preferences. Narratives don’t become real until they show up in indicators.

Now this is interesting. Economics seems to function in modernity as a better religion than actual religions. It allows you to have a sense of inhabiting the world rather than a besieged, paranoid corner of it. It allows you to care about the world in a fixed way, while still keeping up reasonably with its fluid, dynamic, changing nature. What it cannot accommodate, it can confidently treat as noise.

Unlike the changeless, distant gods or Gods of traditional religions, the God of economics is a live intelligence, doing all the fluid thinking so you can care in fixed ways. And it’s obvious why it’s able to do that. The economy is, after all, the closest thing to a live, planet-scale artificial intelligence. A different way to think about this helps generalize from this one data point. Economics provides a fixed logic of caring despite a complex, Permaweird world because it rests atop a vast, interposing⁵ intelligence that processes or tunes out most of the weirdness. A kind of intelligence that religion once embodied in a slower, less weird era. A Turing-complete pre-processing/post-processing layer, mediating your relationship to reality. I’m using the term interposing intelligence rather than container or matrix because the mediation has a more open and leaky topology. It allows you to compute with reality data more easily, but doesn’t necessarily seal you off in a bubble reality. Interposing intelligences are more like slippers than spacesuits; more like gardens than arcologies.

The cryptoeconomy is another obvious example, with blockchains playing the role of the interposing intelligence.

A third world is the world of machine learning, which is a rather on-the-nose kind of interposing intelligence layer. … There is a new world of people being born, whose relationship to reality is beginning to be entirely mediated by the interposing intelligence of machine learning.  

A fourth world is perhaps the oldest: the global institutional landscape peopled by careerists devoted to individual institutions. It’s not as obvious as in the case of the economy, but the institutional world (which its critics often refer to as the global Deep State) and its inhabitants (whom critics tend to refer to uniformly as “bureaucrats”) is in fact a world-scale computer that sustains a fixed logic of caring within itself. Shorn of the conspiratorial fantasies of critics, deep state is not a bad term for it.

Is there a way to hold on to a fixed logic of caring, without retreating from the world, and without resting on top of an interposing intelligence? I don’t think this is possible anymore.

Find Fluidity

The problem with everybody switching to fixed logics of caring is that it doesn’t solve the fogginess of the real world. In fact, even if all dozen or so consequential worlds that make up the real world were to harden into de facto fixed-logics-of-caring worlds that individually found flow within, you would still not be free of the fog in the real world. Combating fog in the real world requires at least a fraction of humanity operating by fluid logics of caring.

To want a fluid logic of care is to want “a locus of care that follows the liveliest streams of events in the world.” It used to work well until about 2015.

You could care about tech, for example. What was good for tech was good for the world, and vice versa. But unlike economics, tech does not offer a fixed logic for how to care.
Cosmopolitan globalism was another. Pre-wokism social justice was a third. Following basic scientific advances was a fourth.

But all these examples have “failed” in a certain way since 2015. You can still operate by them, but you will get lost in fog and lose all sense of flow. As a result, all these example worlds have succumbed to sclerotic fixed logics imported from adjacent domains. Technology is increasingly applied investment economics. Cosmopolitan globalism and social justice are now both applied Deep Statisms. No doubt other once-fluid logics of caring will get “compiled,” as it were, to fixed logics of caring running atop interposing intelligence layers.

So is there a way to retain a fluid logic of caring?
Reality — and this time I mean material reality — does indeed have a liberal bias in a rather precise sense: it requires fluid logics of caring to de-fog. A logic of caring that follows the action instead of being driven by values.

No combination of fixed logics of caring will do the trick. Nor will operating atop a fixed interposing intelligence layer.

Multiple Interposing Intelligences

My big takeaway from the analysis so far is this: there is no way to retain flow in the world today without augmenting your intelligence in some way. This is evident even in my personal, rather primitive case of using hypertext as an integral element of my writing and sensemaking.

This is why all known examples of worlds in flow today rest atop powerful interposing intelligence layers that mediate relations to reality: the economy, blockchains, AI itself, and institutions. But the inescapable cost of this seems to be that fluid logics of caring become fixed, and our sense of the real world, as opposed to our favored individual ones, becomes vulnerable to fog.

To retain fluidity, you must retain an unmediated connection to reality. But the unaugmented brain is clearly not enough for that connection to be tractable to manage.

How do you resolve this paradox?

I think the trick is to inhabit more than one interposing intelligence layer. If you’re only an economist or only a deep-state institutionalist, you’ll retreat to a fixed logic of caring; a terminal derp.

But if you’re both, the possibility of fluid logics of caring remains, because the two interposing varieties of intelligence are not compatible. Sometimes they will be in conflict when they try to mediate your presence in the world. And in that conflict, you will find just enough of an unmediated connection to reality to continue caring about the world in a fluid way, without becoming overwhelmed by complexity.

A specific example of this is thinking about holding the stock of a company you work for. Both economic and institutional logics of caring apply, but neither provides a complete answer to the question of how much of the stock to hold, and when to sell. The two fixed answers will likely be incompatible, so you’ll need a fluid logic to integrate them. If you’re in the public sector, voting on taxes creates a similar tension.

I listed 4 world-scale interposing intelligences earlier, and each pairing seems to work well. Cryptoeconomics and traditional economics seem caught in a dialectic of discovering each other’s fundamental flaws and destabilizing each other. Machine learning and blockchains seem headed for a collision via zero-knowledge proof technologies. Institutionalism and blockchains seem headed for a collision via smart contract technology. Institutionalism and economics have been the locus of the familiar Left/Right tension across the world.

I’ll let you work out the other combinations, but if you’ve tried thinking about the world through any two of the available interposing intelligences, you’ll realize how difficult it is. Difficult, but it’s possible. And at least in my case, the more I practice, the better I get at this (I try to straddle all four of the ones I’ve listed).

Why does this work? Why does it serve to “continue the game” in infinite game terms? One way to think about it is to think about life in terms of step-by-step decisions.

If you live within a traditional world that does not supply an interposing intelligence layer at all, you will mostly not have any decision-support at all that can keep up. Your decisions outside your shrinking world will be random and chaotic. Your instinct will be to restrict scope until all decisions are within the scope of your logic of caring, whether fluid or fixed.

If you live atop a single interposing intelligence, you will always have meaningful decision-support within a fixed logic of caring. You’ll have a take on everything, and feel in flow within your world, but have a sense of the “real world” you share with others being in a state of insane chaos. It would all make sense and flow beautifully if only those others stopped being stupid and joined your world.

But if you live atop more than one interposing intelligence, you will have to choose at every step whether to tap into one of the available fixed logics of caring (picking a side), or improvising your own choice. In the latter case, your thinking will leak through and connect to reality in an unmediated way. If you’re able to do this consistently, you will likely experience increasing amounts of flow, and also beat back the fogginess of the real world, not just your own world.

And this notion of straddling a sort of plate-tectonics of multiple interposing intelligences, with gaps, faultlines and inconsistencies, is the reason the resulting narrative is a kind of protocol narrative. The narrative of the real world emerges out of an interoperable network of world narratives. And through the conflicts between worlds, the infinite game keeps renewing itself.

But it takes a critical mass of humans operating by fluid logics of caring for this to happen. And until that critical mass is reached, the real world will remain foggy for everybody. And trying to be in that minority will be a thankless and stressful task, immersed in fog.
But then again, public service has never been an easy calling.