Monday, July 31, 2023

The visible gorilla.

A staple of my lectures in the 1990s was showing the ‘invisible gorilla’ video, in which viewers were asked to count the number of times that students wearing white shirts passed a basketball. After the start of the game, a student in a gorilla costume walks slowly through the group, pauses in the middle to wave, and moves off screen to the left. Most viewers who are busy counting the ball passes don't report seeing the gorilla. Here's the video:

 

Wallisch et al. now update this experiment on inattentional blindness in an article titled "The visible gorilla: Unexpected fast—not physically salient—objects are noticeable." Here are their significance statement and abstract:

Significance

Inattentional blindness, the inability to notice unexpected objects if attention is focused on a task, is one of the most striking phenomena in cognitive psychology. It is particularly surprising, in light of the research on attentional capture and motion perception, that human observers suffer from this effect even when the unexpected object is moving. Inattentional blindness is commonly interpreted as an inevitable cognitive deficit—the flip side of task focusing. We show that this interpretation is incomplete, as observers can balance the need to focus on task demands with the need to hedge for unexpected but potentially important objects by redeploying attention in response to fast motion. This finding is consistent with the perspective of a fundamentally competent agent who effectively operates in an uncertain world.
Abstract
It is widely believed that observers can fail to notice clearly visible unattended objects, even if they are moving. Here, we created parametric tasks to test this belief and report the results of three high-powered experiments (total n = 4,493) indicating that this effect is strongly modulated by the speed of the unattended object. Specifically, fast—but not slow—objects are readily noticeable, whether they are attended or not. These results suggest that fast motion serves as a potent exogenous cue that overrides task-focused attention, showing that fast speeds, not long exposure duration or physical salience, strongly diminish inattentional blindness effects.

Friday, July 28, 2023

Unnarratability - The Tower of Babel redux - where have all the common narratives gone?

I pass on some clips from Venkatesh Rao's recent Ribbonfarm Studio posting. Perspectives like his make me feel that one's most effective self-preservation stance might be to assume that we are at the dawn of a new dark age, a period during which only power matters and community, cooperation, and kindness are diminished - a period like the early Middle Ages in Europe, which, under the sheltered circumstances of the church, did permit a privileged few a life of contemplation.

Strongly Narratable Conditions

The 1985-2015 period, arguably, was strongly narratable, and unsurprisingly witnessed the appearance of many strong global grand narratives. These mostly hewed to the logic of the there-is-no-alternative (TINA) platform narrative of neoliberalism, even when opposed to it...From Francis Fukuyama and Thomas Friedman in the early years, to Thomas Piketty, Yuval Noah Harari, and David Graeber in the final years, many could, and did, peddle coherent (if not always compelling) Big Histories. Narrative performance venues like TED flourished. The TINA platform narrative supplied the worldwinds for all narratives.
Weakly Narratable Conditions
The 2007-2020 period, arguably, was such a period (the long overlap of 8 years, 2007-15, was a period with uneven weak/strong narratability). In such conditions, a situation is messed-up and contentious, but in a way that lends itself to the flourishing of a pluralist, polycentric narrative landscape, where there are multiple contending accounts of a shared situation, Rashomon style, but the situation is merely ambiguous, not incoherent.
While weakly narratable conditions lack platform narratives, you could argue that there is something of a prevailing narrative protocol during weakly narratable times - an emergent lawful pattern of narrative conflict that cannot be codified into a legible set of consensus rules of narrative engagement, but produces slow noisy progress anyway, does not devolve into confused chaos, and sustains a learnable narrative literacy.
This is what it meant to be “very online” in 2007-20. It meant you had acquired a certain literacy around the prevailing narrative protocol. Perhaps nobody could make sense of what was going on overall, beyond their private, solipsistic account of events, and it was perhaps not possible to play to win, but there was enough coherence in the situation that you could at least play to not lose.
Unnarratable Conditions
The pandemic hit, and we got to what I think of as unnarratable conditions...While the specific story of the pandemic itself was narratable, the story of the wider post-Weirding world, thrown into tumult by the pandemic, was essentially unnarratable.
Unnarratable times are fundamentally incoherent melees of contending historical forces. Times when there isn’t even a narrative protocol you can acquire a reliable literacy in, let alone a platform narrative upon which to rest your sense-making efforts. Where the environmental entropy is so high, people struggle to put together any kind of narrative, even solipsistic private ones that harbor no ambitions of influencing others. There is no privileged class (comparable to the “Very Online” before 2020) that can plausibly claim a greater narrative literacy than other classes.
Those who claim to possess satisfying grand narratives are barely able to persuade even close allies to share it, let alone induce narrative protocols through them, or install them as platform narratives. The result: a collective retreat to a warren of cozy cultural redoubts, usually governed by comforting reactionary or nostalgic local narratives, and a derelict public discourse.
We have been in such a condition at least since 2022, and arguably since 2020. If you set aside the narrow liminal story of the pandemic, the world has been nearly unnarratable for years now.

Wednesday, July 26, 2023

A MindBlog paragraph on non-dual awareness massaged by Bard and ChatGPT-4

This post uses MindBlog to archive a further example of the sort shown in Monday's post, of GPT bots editing a paragraph I wrote, this one being on non-dual awareness. The paragraph:

The fact that  there is no there there means it. Empty is empty. The ground state is just the mute body doing its thing. From its depths storylines rise and fall, bonbons to savor or ignore. The body can know that it has found a new ground state, and can be no longer willing to just play nice if the upstairs word salad is dinking around with or threatening its peace. It can withdraw from the fray. It can rest as a calm non-dual self centered within a cosmic perspective of time and space that views the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.

After incorporating some changes suggested by Google Bard and GPT-4:

When we say something is empty, we mean just that: it contains nothing. Our natural baseline or 'ground state' is the body performing its automatic functions without our conscious intervention. From the body's inherent depths, various thoughts or experiences emerge and recede, like bonbons we can savor or ignore. The body can grow to know that it has found a new ground state in the present moment, a ground state of awareness that no longer needs to be caught up in word salads of disruptive thoughts that can threaten its peace.  It can withdraw from the fray and can rest as a calm non-dual state of being centered within a cosmic perspective of time and space that can view the rise and fall of human civilizations during the anthropocene as minor dances in the music of time.


Monday, July 24, 2023

The evolution of transhuman forms - a MindBlog paragraph edited by GPT implementations

In this post I am documenting the final result of passing a paragraph I wrote on future human evolution through a number of different GPTs (generative pre-trained transformers). This is to archive the results of the exercise for myself, and in case it might be of interest to a few MindBlog readers. The bots were prompted with a request to rewrite or edit the paragraph to make it more accessible to a reader with a college education.

The bottom line is that the edited versions with the most depth and coherence were obtained using OpenAI's ChatGPT-4. Bard.google.com, Google Workspace Labs, and Microsoft's Bing, respectively, provided increasingly less useful versions. Here is my original paragraph:

The linguistic capabilities of generative pre-trained transformers based on silicon  - the same capabilities that have enabled human civilization  - will soon exceed those of humans. We have brought our understanding of what human beings as they now exist are about to a plateau level at which further increments of understanding will become increasingly small. The cream has been skimmed, the best cherries have been picked, and it seems appropriate to turn our  attention now to studying the evolution of trans human forms utilizing LLMs that are better able to manage themselves and the planet.  The action is in the the pretraining P of GPT, as the models move beyond their initial human management to the stage of evolving themselves, hopefully into benign versions of Darwin’s  “forms most beautiful and most wonderful.” What is the point of humans other than being self replicating machines that evolve into further forms, hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Can they evolve into trans human minds (or mind, singular)?
Here is the result of including some of the edits suggested by ChatGPT-4's regenerated response, a version 3 of the paragraph above:
The linguistic capabilities of generative pre-trained transformers based on silicon - the same capabilities that have enabled human civilization - will soon surpass those of humans.

Our understanding of human nature as it currently exists has reached a plateau level, at which further increments of understanding are becoming increasingly small. The cream has been skimmed, the best cherries have been picked. Consequently it seems the right time to shift our focus toward studying the evolution of 'transhuman' forms. These are beings that could potentially leverage advanced language models (LLMs) to better self-regulate themselves and manage the planet.

The real crux of the matter lies in the pretraining phase of these transformers (symbolized by 'P' in GPT). This is when the models are initially developed and fine-tuned. As they evolve beyond their human programming the optimistic vision is that they might transform into benign versions of what Charles Darwin called “forms most beautiful and most wonderful.”

From a philosophical perspective, one might ask: What is the ultimate purpose of human beings, if not to serve as self-replicating entities that evolve into more advanced forms - hopefully before they extinguish themselves by destroying the ecological environment required for their sustenance? Is it possible for humans to evolve into a collective of transhuman minds or even a singular, transcendent mind?  These are questions worth exploring as we stand on the brink of an AI-enhanced future.


Wednesday, July 19, 2023

Proxy Failure is an Inherent Risk in Goal-Oriented Systems

I will pass on the title and abstract of another article to appear in Behavioral and Brain Sciences for which reviewers' comments are being solicited. MindBlog readers can email me to request a PDF of the target article.

Dead rats, dopamine, performance metrics, and peacock tails: proxy failure is an inherent risk in goal-oriented systems

Authors: Yohan J. John, Leigh Caldwell, Dakota E. McCoy, and Oliver Braganza 

Abstract: When a measure becomes a target, it ceases to be a good measure. For example, when standardized test scores in education become targets, teachers may start 'teaching to the test', leading to breakdown of the relationship between the measure--test performance--and the underlying goal--quality education. Similar phenomena have been named and described across a broad range of contexts, such as economics, academia, machine-learning, and ecology. Yet it remains unclear whether these phenomena bear only superficial similarities, or if they derive from some fundamental unifying mechanism. Here, we propose such a unifying mechanism, which we label proxy failure. We first review illustrative examples and their labels, such as the 'Cobra effect', 'Goodhart's law', and 'Campbell's law'. Second, we identify central prerequisites and constraints of proxy failure, noting that it is often only a partial failure or divergence. We argue that whenever incentivization or selection is based on an imperfect proxy measure of the underlying goal, a pressure arises which tends to make the proxy a worse approximation of the goal. Third, we develop this perspective for three concrete contexts, namely neuroscience, economics and ecology, highlighting similarities and differences. Fourth, we outline consequences of proxy failure, suggesting it is key to understanding the structure and evolution of goal-oriented systems. Our account draws on a broad range of disciplines, but we can only scratch the surface within each. We thus hope the present account elicits a collaborative enterprise, entailing both critical discussion as well as extensions in contexts we have missed.
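The core mechanism the authors propose (selection or incentivization on an imperfect proxy tends to decouple the proxy from the goal) is easy to caricature in a toy simulation. The sketch below is my own illustration of the general idea, not a model taken from the target article:

```python
# Illustrative toy Goodhart / proxy-failure simulation (my own, not from the paper).
# Each agent splits a fixed effort budget between real work (which raises the goal
# and the proxy) and "gaming" (which raises only the proxy). Selecting on the proxy
# gradually drives effort into gaming, so the proxy stops tracking the goal.
import random

random.seed(0)
POP, GENERATIONS, KEEP = 200, 30, 50

# Each agent is the fraction of its effort spent on real work; the rest goes to gaming.
population = [random.random() for _ in range(POP)]

def goal(work):
    # The true goal only rewards real work.
    return work

def proxy(work):
    # The proxy rewards gaming twice as efficiently as real work.
    return work + 2 * (1 - work)

for gen in range(GENERATIONS):
    # Select the agents with the highest proxy scores...
    survivors = sorted(population, key=proxy, reverse=True)[:KEEP]
    # ...and repopulate with noisy copies of them.
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                  for _ in range(POP)]
    if gen % 10 == 0 or gen == GENERATIONS - 1:
        mean_goal = sum(map(goal, population)) / POP
        mean_proxy = sum(map(proxy, population)) / POP
        print(f"gen {gen:2d}: mean proxy = {mean_proxy:.2f}, mean true goal = {mean_goal:.2f}")
```

Running this, the proxy score climbs while the true goal collapses, which is the "pressure" toward proxy failure that the abstract describes.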

Monday, July 17, 2023

MindBlog's reading list.

I've decided to pass on links to articles I have found worthwhile reading, realizing that I am not going to have time to frame their ideas into longer posts because I'm spending more time now at my Steinway B's keyboard than at my computer's keyboard. If you encounter a paywall with any of the links, you might try entering the URL at https://archive.is/.

An installment of Venkatesh Rao’s newsletter: The permaweird narrative 

Jaron Lanier “There is no A.I.” in The New Yorker  

‘Human Beings Are Soon Going to Be Eclipsed’ David Brooks in The New York Times commenting on Douglas Hofstadter's recent ideas.

Marc Andreessen offers a horrific commentary titled "Fighting" on Elon Musk challenging Mark Zuckerberg to a cage fight.  

Learning from history. Archeological evidence that early hierarchical or authoritarian cultures didn't persist as long as more cooperative egalitarian ones.

Arthur Brooks on "The illusion of explanatory depth", an installment in his series "How to build a life."

Potential anti-aging therapy.  One sample of the effusive outpouring of new ideas and widgets offered by New Atlas.


Friday, July 14, 2023

‘Adversarial’ search for neural basis of consciousness yields first results

Finkel summarizes the first round of results of an 'adversarial collaboration' funded by the Templeton World Charity Foundation in which
...both sides of the consciousness debate agreed on experiments to be conducted by “theory-neutral” labs with no stake in the outcome. It pits integrated information theory (IIT), the sensory network hypothesis that proposes a posterior “hot zone” as the site of consciousness, against the global neuronal workspace theory (GNWT), which likens networks of neurons in the front of the brain to a clipboard where sensory signals, thoughts, and memories combine before being broadcast across the brain.
“The results corroborate IIT’s overall claim that posterior cortical areas are sufficient for consciousness, and neither the involvement of [the prefrontal cortex] nor global broadcasting are necessary.”
The article describes how the debate continues, with advocates of the prefrontal view suggesting this first experimental round had limitations, and that further experiments will support the role of the prefrontal cortex.

Wednesday, July 12, 2023

The True Threat of Artificial Intelligence

I would recommend having a read through Evgeny Morozov's piece in the NYTimes as an antidote to Marc Andreessen's optimistic vision of AI that was the subject of MindBlog's June 23 post. Here is a small clip from the article, followed by the titles describing different problem areas he sees:
Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization...This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.
They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.
But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.
Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.
Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.
Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.
It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.
Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).
These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.
A.G.I. will never overcome the market’s demands for profit.
A.G.I. will dull the pain of our thorniest problems without fixing them.
A.G.I. undermines civic virtues and amplifies trends we already dislike.
Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

Monday, July 10, 2023

Inheritance of social status - stability in England from 1600 to 2022.

From historical records, Clark demonstrates (open source) strong persistence of social status across family trees over 400 years, in spite of large increases in general levels of education and social mobility.

Significance

There is widespread belief across the social sciences in the ability of social interventions and social institutions to significantly influence rates of social mobility. In England, 1600 to 2022, we see considerable change in social institutions across time. Half the population was illiterate in 1800, and not until 1880 was compulsory primary education introduced. Progressively after this, educational provision and other social supports for poorer families expanded greatly. The paper shows, however, that these interventions did not change in any measurable way the strong familial persistence of social status across generations.
Abstract
A lineage of 422,374 English people (1600 to 2022) contains correlations in social outcomes among relatives as distant as 4th cousins. These correlations show striking patterns. The first is the strong persistence of social status across family trees. Correlations decline by a factor of only 0.79 across each generation. Even fourth cousins, with a common ancestor only five generations earlier, show significant status correlations. The second remarkable feature is that the decline in correlation with genetic distance in the lineage is unchanged from 1600 to 2022. Vast social changes in England between 1600 and 2022 would have been expected to increase social mobility. Yet people in 2022 remain correlated in outcomes with their lineage relatives in exactly the same way as in preindustrial England. The third surprising feature is that the correlations parallel those of a simple model of additive genetic determination of status, with a genetic correlation in marriage of 0.57.
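To make the headline number concrete: under a simple geometric model in which status correlation falls by a constant factor of 0.79 per generation of separation, the expected correlations can be tabulated directly. A minimal sketch, with the caveat that mapping particular relatives (e.g., fourth cousins) onto a generational distance is my own simplification rather than the paper's exact model:

```python
# Illustrative only: geometric decay of status correlation, using the
# 0.79-per-generation persistence factor quoted in the abstract. How a given
# relative type maps onto n generations of separation is a simplification.
PERSISTENCE = 0.79

def predicted_correlation(n_generations, base=1.0):
    """Correlation expected after n generations of separation."""
    return base * PERSISTENCE ** n_generations

for n in range(1, 11):
    print(f"{n:2d} generations of separation: r ~ {predicted_correlation(n):.3f}")
```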

Friday, July 07, 2023

A meta-analysis questions the cognitive benefits of physical activity.

I give up. If anything was supposed to have been proven, I would have thought it would be that exercise has a beneficial effect on brain health and cognition. Now Ciria et al. offer the following in Nature Human Behaviour:
Extensive research links regular physical exercise to an overall enhancement of cognitive function across the lifespan. Here we assess the causal evidence supporting this relationship in the healthy population, using an umbrella review of meta-analyses limited to randomized controlled trials (RCTs). Despite most of the 24 reviewed meta-analyses reporting a positive overall effect, our assessment reveals evidence of low statistical power in the primary RCTs, selective inclusion of studies, publication bias and large variation in combinations of pre-processing and analytic decisions. In addition, our meta-analysis of all the primary RCTs included in the revised meta-analyses shows small exercise-related benefits (d = 0.22, 95% confidence interval 0.16 to 0.28) that became substantially smaller after accounting for key moderators (that is, active control and baseline differences; d = 0.13, 95% confidence interval 0.07 to 0.20), and negligible after correcting for publication bias (d = 0.05, 95% confidence interval −0.09 to 0.14). These findings suggest caution in claims and recommendations linking regular physical exercise to cognitive benefits in the healthy human population until more reliable causal evidence accumulates.
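To get an intuitive feel for how small these pooled effects are, a Cohen's d can be converted into the probability that a randomly chosen participant from the exercise arm outscores a randomly chosen control (the common-language effect size, P = Φ(d/√2)). This is my own illustrative calculation, not an analysis from the paper:

```python
# Illustrative only: convert the Cohen's d values quoted in the abstract into the
# probability that a random exerciser outscores a random control, P = Phi(d / sqrt(2)).
from math import sqrt
from statistics import NormalDist

def prob_superiority(d):
    """Common-language effect size: P(random exerciser > random control)."""
    return NormalDist().cdf(d / sqrt(2))

for label, d in [("raw pooled effect", 0.22),
                 ("after moderators", 0.13),
                 ("after publication-bias correction", 0.05)]:
    print(f"{label}: d = {d:.2f} -> P ~ {prob_superiority(d):.2f}")
```

By this yardstick the bias-corrected estimate (d = 0.05) is barely better than a coin flip, at about 0.51.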
I cannot offer an informed opinion on this abstract because my usual access to journals through the University of Wisconsin library does not work with Nature Human Behaviour. However, I can point you to an excellent commentary by Claudia Lopez Lloreda that discusses the meta-analysis done by Ciria et al. and gives a summary of several recent studies on exercise and brain health.

Wednesday, July 05, 2023

Why music training slows cognitive aging

A team of Chinese collaborators has reported experiments in the Oxford academic journal Cerebral Cortex, in an article titled "Functional gradients in prefrontal regions and somatomotor networks reflect the effect of music training experience on cognitive aging," which are stated to show that music training enhances the functional separation between regions across prefrontal and somatomotor networks, delaying deterioration in working memory performance and in the prefrontal suppression of prominent but irrelevant information. I'm passing on the abstract and a clip from the paper's conclusion, and can send interested readers the whole article. I think it is an important article, but I find it is rendered almost unintelligible by Chinese-to-English translation issues. I'm surprised the journal let this article appear without further editing.
Studies showed that the top-down control of the prefrontal cortex (PFC) on sensory/motor cortices changes during cognitive aging. Although music training has demonstrated efficacy on cognitive aging, its brain mechanism is still far from clear. Current music intervention studies have paid insufficient attention to the relationship between PFC and sensory regions. Functional gradient provides a new perspective that allows researchers to understand network spatial relationships, which helps study the mechanism of music training that affects cognitive aging. In this work, we estimated the functional gradients in four groups, young musicians, young control, older musicians, and older control. We found that cognitive aging leads to gradient compression. Compared with young subjects, older subjects presented lower and higher principal gradient scores in the right dorsal and medial prefrontal and the bilateral somatomotor regions, respectively. Meanwhile, by comparing older control and musicians, we found a mitigating effect of music training on gradient compression. Furthermore, we revealed that the connectivity transitions between prefrontal and somatomotor regions at short functional distances are a potential mechanism for music to intervene in cognitive aging. This work contributes to understanding the neuroplasticity of music training on cognitive aging.
From the conclusion paragraph:
In a nutshell, we demonstrate the top-down control of prefrontal regions to the somatomotor network, which is associated with inhibitory function and represents a potential marker of cognitive aging, and reveal that music training may work by affecting the connectivity between the two regions. Although this work has investigated the neuroplasticity of music on cognitive aging by recruiting subjects of different age spans, the present study did not include the study of longitudinal changes of the same group. Further studies should include longitudinal follow-up of the same groups over time to more accurately evaluate the effect of music intervention on the process of cognitive aging.

Monday, July 03, 2023

What Babies Know from zero to 1 year - core systems of knowledge

The journal Behavioral and Brain Sciences has sent out to reviewers the précis of a book, "What Babies Know" by Elizabeth S. Spelke, Harvard Psychology Dept. The abstract of her précis:
Where does human knowledge begin? Research on human infants, children, adults, and non-human animals, using diverse methods from the cognitive, brain, and computational sciences, provides evidence for six early emerging, domain-specific systems of core knowledge. These automatic, unconscious systems are situated between perceptual systems and systems of explicit concepts and beliefs. They emerge early in infancy, guide children’s learning, and function throughout life.
Spelke lists domain-specific core systems that are ancient, emerge early in life, and are invariant over later development. These deal with vision, objects, places, number, core knowledge, agents, social cognition, and language. Figures in the précis illustrate basic experiments characterizing the core systems. Motivated readers can obtain a PDF of the précis by emailing me.

Friday, June 30, 2023

Managing the risks of AI in pragmatic ways.

I want to pass on the final paragraphs of a recent commentary by Venkatesh Rao on the tragedy of the Titan submersible, which was a consequence of Stockton Rush, the CEO of OceanGate Expeditions, taking a number of design risks to reduce costs and increase profits. The bulk of Rao's piece deals with issues in the design of potentially dangerous new technologies, and the final paragraphs deal with managing the risks of artificial intelligence in pragmatic ways.
...AI risk, understood as something very similar to ordinary kinds of engineering risk (such as the risk of submersibles imploding), is an important matter, but lurid theological conceptions of AI risk and “alignment” are a not-even-wrong basis for managing it. The Titan affair, as an object lesson in traditional risk-management, offers many good lessons for how to manage real AI risks in pragmatic ways.
But there’s another point, a novel one, that is present in the case of AI that I don’t think has ever been present in technological leaps of the past.
AI is different from other technologies in that it alters the felt balance between knowledge and incomprehension that shapes our individual and collective risk-taking in the world.
AIs are already very good at embodying knowledge, and better at explaining many complex matters than most humans. But they are not yet very good at embodying doubt and incomprehension. They structurally lack epistemic humility and the ability to act on a consciousness of ignorance in justifiably arbitrary ways (ie on the basis of untheorized conservative decision principles backed by a track record). This is something bureaucratic standards bodies do very well. It is something that “software bureaucracies” (such as RLHF — reinforcement learning with human feedback) don’t do very well at all. The much demonized (by the entrepreneurial class) risk-aversion of bureaucrats is also a kind of ex-officio epistemic humility that is an essential ingredient of technology ecosystems.
On the flip side, AI itself is currently a technology full of incomprehensibilities. We understand the low-level mechanics of graph weights, gradient descents, backpropagation, and matrix multiplications. We do not understand how that low-level machinery produces the emergent outcomes it does. Our incomprehensions about AI are comparable to our incomprehensions about our own minds. This makes them extremely well-suited (impedance matched) to being bolted onto our minds as cognitive prosthetics that feel very comfortable, increase our confidence about what we think we know, and turn into extensions of ourselves (this is not exactly surprising, given that they are trained on human-generated data).
As with submersibles, we are at an alchemical state of understanding with AIs, but because of the nature of the technology itself, we might develop a prosthetic overconfidence in our state of indirect knowledge about the world, via AI.
AI might turn all humans who use it into Stockton Rushes.
The risk that AIs might destroy us in science-fictional ways is overblown, but the risk that they might tempt us into generalized epistemic overconfidence, and systematically blind us to our incomprehensions, leading us to hurt ourselves in complex ways, is probably not sufficiently recognized.
Already, masses of programmers are relying on AIs like Github Copilot, and acting with a level of confidence in generated code that is likely not justified. AI-augmented programmers, even if sober and cautious as unaugmented individuals, might be taking Stockton-Rush type risks due to the false confidence induced by their tools. I don’t know that this is true, but the reports I see about people being 10x more productive and taking pleasure in programming again strike me as warning signs. I suspect there might be premature aestheticization going on here.
And I suspect it will take a few AI-powered Titan-like tragedies for us to wise-up and do something about it.
One way to think about this risk is by analogy to WMDs. Most people think nuclear weapons when they hear the phrase, but perhaps the most destructive WMD in the world is cheap and highly effective small arms, which have made conflicts far deadlier in the last century, and killed way more humans in aggregate than nuclear weapons.
You do not need to worry about a single AI going “AGI” and bringing God-like catastrophes of malice or indifference down upon us. We lack the “nuclear science” to make that sort of thing happen. But you do need to be worried about millions of ordinary humans, drawn into drunken overconfidence by AI tools, wreaking the kind of havoc small arms do.

Wednesday, June 28, 2023

Mechanisms that link psychological stress to the exacerbation of gut inflammation.

Schneider et al. describe one mechanism by which psychological stress damages our health - the enteric nervous system relays psychological stress to intestinal inflammation. Here is their abstract:

Highlights

• Psychological stress leads to monocyte-mediated exacerbation of gut inflammation
• Chronic glucocorticoid signaling drives the effect of stress on IBD
• Stress induces inflammatory enteric glia that promote monocyte recruitment via CSF1
• Stress provokes transcriptional immaturity in enteric neurons and dysmotility
Summary 
Mental health profoundly impacts inflammatory responses in the body. This is particularly apparent in inflammatory bowel disease (IBD), in which psychological stress is associated with exacerbated disease flares. Here, we discover a critical role for the enteric nervous system (ENS) in mediating the aggravating effect of chronic stress on intestinal inflammation. We find that chronically elevated levels of glucocorticoids drive the generation of an inflammatory subset of enteric glia that promotes monocyte- and TNF-mediated inflammation via CSF1. Additionally, glucocorticoids cause transcriptional immaturity in enteric neurons, acetylcholine deficiency, and dysmotility via TGF-β2. We verify the connection between the psychological state, intestinal inflammation, and dysmotility in three cohorts of IBD patients. Together, these findings offer a mechanistic explanation for the impact of the brain on peripheral inflammation, define the ENS as a relay between psychological stress and gut inflammation, and suggest that stress management could serve as a valuable component of IBD care.

Monday, June 26, 2023

The vagus nerve, heart rate variability, and subjective wellbeing - a MindBlog self experiment

In this post I pass on to MindBlog readers a NYTimes article by Christina Caron that has been republished several times by the newspaper. It is a sane account of what the vagus nerve is and what it does... The vagus is the main nerve of the parasympathetic nervous system. Unlike the sympathetic nervous system, which is associated with arousal of the body and the “fight or flight” response, the parasympathetic branch helps us rest, digest and calm down. Numerous experiments have shown that increased activity of the nerve correlates with an improvement in mood. From the article (slightly edited):
The activity of the vagus nerve is difficult to measure directly, especially given how complex it is. But because some vagus nerve fibers connect with the heart, experts can indirectly measure cardiac vagal tone — or the way in which your nervous system regulates your heart — by looking at your heart rate variability (HRV), which is the fluctuations in the amount of time between your heartbeats...An abnormal vagal tone — one in which there is very little HRV — has been associated with conditions like diabetes, heart failure and hypertension...A high HRV may signify an ideal vagal tone. The typical range of HRV is between 20 and 200 msec.
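For readers unfamiliar with how an HRV number in milliseconds is actually produced, the sketch below computes the two standard time-domain metrics from a series of inter-beat (RR) intervals. The interval values are made up for illustration, and consumer wearables each report their own variant of these measures:

```python
# Illustrative only: the two standard time-domain HRV measures computed from
# inter-beat (RR) intervals in milliseconds. The interval values are made up.
from math import sqrt
from statistics import stdev

rr_ms = [812, 790, 845, 830, 805, 870, 825, 798, 840, 815]  # hypothetical RR intervals

def sdnn(rr):
    """Standard deviation of all RR intervals."""
    return stdev(rr)

def rmssd(rr):
    """Root mean square of successive differences between adjacent intervals."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

print(f"SDNN  ~ {sdnn(rr_ms):.1f} ms")
print(f"RMSSD ~ {rmssd(rr_ms):.1f} ms")
```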

I will give my own experience... I have been using an Oura Ring since December 2021, and more recently an Apple Watch, to monitor nighttime resting heart rate, HRV, body temperature, and respiratory rate. By now I have documented numerous instances of a correlation - occurring over a period of several months - between subjective well-being, average nighttime HRV, and duration of deep (restorative) sleep. (See the plot below showing HRV and duration of deep sleep over the past several months.) During periods of stress my average nighttime HRV decreases to ~20 msec and remains relatively constant throughout sleep; during periods when I am feeling open, chilled out, and flexible, average nighttime HRV has increased to ~100 msec with large variations during the night. I've also played with techniques meant to tweak parasympathetic/sympathetic balance and found that delivering mild shocks to the body by perturbing breathing, or using biofeedback to enhance HRV, can correlate with increased average nighttime HRV and daytime sense of well-being. Even though I take myself to be an unbiased observer and don't think that I am just feeling what I would like to feel - less stressed and more chilled out - it is important to note the usual caveat that any human reports might be biased by a placebo effect. [BUT...see added note below]

Screen shot from the Oura Ring web interface:

NOTE ADDED 11/12/23: The correlation shown has to be taken with a grain of salt, because the correlation coefficient dropped to ~0.1 for the next three-month period, and remains there as of 11/12/23.
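For anyone who wants to run the same kind of check on their own exported data, here is a minimal sketch that computes the correlation coefficient between nightly HRV and deep sleep duration. The CSV column names and file name are hypothetical, not the actual Oura export format:

```python
# Illustrative only: Pearson correlation between average nighttime HRV and deep
# sleep duration, read from a hypothetical CSV with columns
# "date,avg_hrv_ms,deep_sleep_min" (not the actual Oura file format).
import csv
from statistics import correlation  # Python 3.10+

def hrv_deep_sleep_correlation(path):
    """Pearson r between nightly average HRV and deep sleep minutes."""
    hrv, deep = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hrv.append(float(row["avg_hrv_ms"]))
            deep.append(float(row["deep_sleep_min"]))
    return correlation(hrv, deep)

# Example call (hypothetical file name):
# print(hrv_deep_sleep_correlation("oura_nightly_export.csv"))
```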

Friday, June 23, 2023

Why AI Will Save The World

I want to pass on the following text of a Marc Andreessen Substack post. This is because my usual extracting of small clips of text to give a sense of the whole just wasn't doing its job. I think that all of this sane article should be read by anyone interested in AI. Andreessen thinks that AI is quite possibly the most important – and best – thing our civilization has ever created, and that the development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future. I would urge you to have patience with the article's arguments against the conventional wisdom regarding the dangers of AI, even if they seem jarring and overstated to you.

Why AI Will Save The World

The era of Artificial Intelligence is here, and boy are people freaking out.

Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.

An even shorter description of what AI could be: A way to make everything we care about better.

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.

  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.

  • Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.

  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.

  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.

  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.

  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.

The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.

We should be living in a much better world with AI, and now we can.

So Why The Panic?

In contrast to this positive view, the public conversation about AI is presently shot through with hysterical fear and paranoia.

We hear claims that AI will variously kill us all, ruin our society, take all our jobs, cause crippling inequality, and enable bad people to do awful things.

What explains this divergence in potential outcomes from near utopia to horrifying dystopia?

Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic – a social contagion that convinces people the new technology is going to destroy the world, or society, or both. The fine folks at Pessimists Archive have documented these technology-driven moral panics over the decades; their history makes the pattern vividly clear. It turns out this present panic is not even the first for AI.

Now, it is certainly the case that many new technologies have led to bad outcomes – often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about.

But a moral panic is by its very nature irrational – it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.

And wow do we have a full-blown moral panic about AI right now.

This moral panic is already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws. These actors, who are making extremely dramatic public statements about the dangers of AI – feeding on and further inflaming moral panic – all present themselves as selfless champions of the public good.

But are they?

And are they right or wrong?

The Baptists And Bootleggers Of AI

Economists have observed a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories – “Baptists” and “Bootleggers” – drawing on the historical example of the prohibition of alcohol in the United States in the 1920’s:

  • “Baptists” are the true believer social reformers who legitimately feel – deeply and emotionally, if not rationally – that new restrictions, regulations, and laws are required to prevent societal disaster.

    For alcohol prohibition, these actors were often literally devout Christians who felt that alcohol was destroying the moral fabric of society.

    For AI risk, these actors are true believers that AI presents one or another existential risks – strap them to a polygraph, they really mean it.

  • “Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.

    For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned.

    For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition – the software version of “too big to fail” banks.

A cynic would suggest that some of the apparent Baptists are also Bootleggers – specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.

The problem with the Bootleggers is that they win. The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want – regulatory capture, insulation from competition, the formation of a cartel – and the Baptists are left wondering where their drive for social improvement went so wrong.

We just lived through a stunning example of this – banking reform after the 2008 global financial crisis. The Baptists told us that we needed new laws and regulations to break up the “too big to fail” banks to prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal, but in reality was coopted by the Bootleggers – the big banks. The result is that the same banks that were “too big to fail” in 2008 are much, much larger now.

So in practice, even when the Baptists are genuine – and even when the Baptists are right – they are used as cover by manipulative and venal Bootleggers to benefit themselves. 

And this is what is happening in the drive for AI regulation right now.

However, it isn’t sufficient to simply identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.

AI Risk #1: Will AI Kill Us All?

The first and original AI doomer risk is that AI will decide to literally kill humanity.

The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man, for which Prometheus was condemned to perpetual torture by the gods. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein, or, The Modern Prometheus, in which we develop the technology for eternal life, which then rises up and seeks to destroy us. And of course, no AI panic newspaper story is complete without a still image of a gleaming red-eyed killer robot from James Cameron’s Terminator films.

The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies – fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization as used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most – all? – new technologies, and in practice inflames destructive emotion rather than reasoned analysis. Just because premodern man freaked out like this doesn’t mean we have to; we can apply rationality instead.

My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI – Baptists – who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well.

Specifically, I think three things are going on:

First, recall that John Von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons – which helped end World War II and prevent World War III – with, “Some people confess guilt to claim credit for the sin.” What is the most dramatic way one can claim credit for the importance of one’s work without sounding overtly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI – watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.”)

Second, some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.

Third, California is justifiably famous for our many thousands of cults, from EST to the Peoples Temple, from Heaven’s Gate to the Manson Family. Many, although not all, of these cults are harmless, and maybe even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time straddling the line that ultimately leads to violence and death.

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.

This cult is why there are a set of AI risk doomers who sound so extreme – it’s not that they actually have secret knowledge that make their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.

It turns out that this type of cult isn’t new – there is a longstanding Western tradition of millenarianism, which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult. From Wikipedia, with additions by me:

“Millenarianism is the belief by a group or movement [AI risk doomers] in a coming fundamental transformation of society [the arrival of AI], after which all things will be changed [AI utopia, dystopia, and/or end of the world]. Only dramatic events [AI bans, airstrikes on datacenters, nuclear strikes on unregulated AI] are seen as able to change the world [prevent AI] and the change is anticipated to be brought about, or survived, by a group of the devout and dedicated. In most millenarian scenarios, the disaster or battle to come [AI apocalypse, or its prevention] will be followed by a new, purified world [AI bans] in which the believers will be rewarded [or at least acknowledged to have been correct all along].”

This apocalypse cult pattern is so obvious that I am surprised more people don’t see it.

Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society – obviously not.

AI Risk #2: Will AI Ruin Our Society?

The second widely mooted AI risk is that AI will ruin our society, by generating outputs that will be so “harmful”, to use the nomenclature of this kind of doomer, as to cause profound damage to humanity, even if we’re not literally killed.

Short version: If the murder robots don’t get us, the hate speech and misinformation will.

This is a relatively recent doomer concern that branched off from and somewhat took over the “AI risk” movement that I described above. In fact, the terminology of AI risk recently changed from “AI safety” – the term used by people who are worried that AI would literally kill us – to “AI alignment” – the term used by people who are worried about societal “harms”. The original AI safety people are frustrated by this shift, although they don’t know how to put it back in the box – they now advocate that the actual AI risk topic be renamed “AI notkilleveryoneism”, which has not yet been widely adopted but is at least clear.

The tipoff to the nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.

As it happens, I have had a front row seat to an analogous situation – the social media “trust and safety” wars. As is now obvious, social media services have been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content for many years. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”. 

My big lessons from the social media wars are:

On the one hand, there is no absolutist free speech position. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitements to real world violence, that are nearly universally agreed to be off limits – legal or not – by virtually every society. So any technological platform that facilitates or generates content – speech – is going to have some restrictions.

On the other hand, the slippery slope is not a fallacy, it’s an inevitability. Once a framework for restricting even egregiously terrible content is in place – for example, for hate speech, a specific hurtful word, or for misinformation, obviously false claims like “the Pope is dead” – a shockingly broad range of government agencies, activist pressure groups, and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. They will do this up to and including in ways that are nakedly felony crimes. In practice, this cycle can apparently run forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. It has been cascading in social media for a decade and, with only certain exceptions, continues to get more fervent all the time.

And so this is the dynamic that has formed around “AI alignment” now. Its proponents claim the wisdom to engineer AI-generated speech and thought that are good for society, and to ban AI-generated speech and thought that are bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous – and often outright criminal, at least in the US – and in fact are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984.

As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites – which includes many of the people who work in and write about the tech industry – many of you reading this will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now; I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers is trying to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don’t let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

The fear of job loss due variously to mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom. Even though every new major technology has led to more jobs at higher wages throughout history, each wave of this panic is accompanied by claims that “this time is different” – this is the time it will finally happen, this is the technology that will finally deliver the hammer blow to human labor. And yet, it never happens. 

We’ve been through two such technology-driven unemployment panic cycles in our recent past – the outsourcing panic of the 2000s and the automation panic of the 2010s. Notwithstanding many talking heads, pundits, and even tech industry executives pounding the table throughout both decades that mass unemployment was near, by late 2019 – right before the onset of COVID – the world had more jobs at higher wages than ever in history.

Nevertheless this mistaken idea will not die.

And sure enough, it’s back.

This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous – real AI. Surely this time history won’t repeat, and AI will cause mass unemployment – and not rapid economic, job, and wage growth – right?

No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self-interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.

To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. For, as Milton Friedman observed, “Human wants and needs are endless” – we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there. And that is why technology doesn’t destroy jobs and never will.
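
To make the arithmetic of that cycle concrete, here is a toy sketch in Python. The number of households, their incomes, the 25% productivity gain, and the assumption that efficiency gains pass through fully into prices are all invented purely for illustration; this is back-of-the-envelope arithmetic to show the mechanism, not an economic model.

```python
# Toy illustration of the productivity cycle described above.
# All numbers are invented; the point is only to show how lower prices
# free up spending power that becomes new demand elsewhere in the economy.

households = 1000
income_per_household = 50_000      # annual income, arbitrary units
price_of_basket = 40_000           # cost of today's goods and services

def spending_left_over(price):
    """Money each household has left after buying the existing basket."""
    return income_per_household - price

before = spending_left_over(price_of_basket)

# Assume technology makes production 25% more efficient, and competition
# passes that efficiency through as lower prices for the same basket.
productivity_gain = 0.25
new_price = price_of_basket / (1 + productivity_gain)

after = spending_left_over(new_price)

freed_up_per_household = after - before
total_new_demand = freed_up_per_household * households

print(f"Extra spending power per household: {freed_up_per_household:,.0f}")
print(f"New demand across the economy:      {total_new_demand:,.0f}")
# That new demand is what funds new products, new industries, and new jobs
# for the workers displaced from the old, now more efficient, production.
```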

These are such mind-blowing ideas for people who have not been exposed to them that it may take you some time to wrap your head around them. But I swear I’m not making them up – in fact you can read all about them in standard economics textbooks. I recommend the chapter “The Curse of Machinery” in Henry Hazlitt’s Economics In One Lesson, and Frederic Bastiat’s satirical Candlemaker’s Petition to blot out the sun due to its unfair competition with the lighting industry, here modernized for our times.

But this time is different, you’re thinking. This time, with AI, we have the technology that can replace ALL human labor.

But, using the principles I described above, think of what it would mean for literally all existing human labor to be replaced by machines.

It would mean a takeoff rate of economic productivity growth that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop across the board to virtually zero. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and AI as they could as fast as possible to meet all the new demand.

Suppose AI once again replaces that labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be a straight spiral up to a material utopia that neither Adam Smith nor Karl Marx ever dared dream of.

We should be so lucky.

AI Risk #4: Will AI Lead To Crippling Inequality?

Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is, OK, Marc, suppose AI does take all the jobs, either for bad or for good. Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and regular people get nothing?

As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet.

The classic example of this was Elon Musk’s so-called “secret plan” – which he naturally published openly – for Tesla in 2006:

Step 1, Build [expensive] sports car

Step 2, Use that money to build an affordable car

Step 3, Use that money to build an even more affordable car

…which is of course exactly what he’s done, becoming the richest man in the world as a result.

That last point is key. Would Elon be even richer if he only sold cars to rich people today? No. Would he be even richer than that if he only made cars for himself? Of course not. No, he maximizes his own profit by selling to the largest possible market, the world.

In short, everyone gets the thing – as we saw in the past with not just cars but also electricity, radio, computers, the Internet, mobile phones, and search engines. The makers of such technologies are highly motivated to drive down their prices until everyone on the planet can afford them. This is precisely what is already happening in AI – it’s why you can use state-of-the-art generative AI not just at low cost but even for free today in the form of Microsoft Bing and Google Bard – and it is what will continue to happen. Not because such vendors are foolish or generous but precisely because they are greedy – they want to maximize the size of their market, which maximizes their profits.

So what happens is the opposite of technology driving centralization of wealth – individual customers of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the generated value. As with prior technologies, the companies that build AI – assuming they have to function in a free market – will compete furiously to make this happen.

Marx was wrong then, and he’s wrong now.

This is not to say that inequality is not an issue in our society. It is; it’s just not being driven by technology. It’s being driven by the reverse: by the sectors of the economy that are most resistant to new technology, the ones with the most government intervention to prevent the adoption of new technology like AI – specifically housing, education, and health care. The actual risk of AI and inequality is not that AI will cause more inequality but rather that we will not allow AI to be used to reduce inequality.

AI Risk #5: Will AI Lead To Bad People Doing Bad Things?

So far I have explained why four of the five most often proposed risks of AI are not actually real – AI will not come to life and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause a ruinous increase in inequality. But now let’s address the fifth, the one I actually agree with: AI will make it easier for bad people to do bad things.

In some sense this is a tautology. Technology is a tool. Tools, starting with fire and rocks, can be used to do good things – cook food and build houses – and bad things – burn people and bludgeon people. Any technology can be used for good or bad. Fair enough. And AI will make it easier for criminals, terrorists, and hostile governments to do bad things, no question.

This causes some people to propose, well, in that case, let’s not take the risk, let’s ban AI now before this can happen. Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It’s the opposite, it’s the easiest material in the world to come by – math and code.

The AI cat is obviously already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and there are outstanding open source implementations proliferating by the day. AI is like air – it will be everywhere. The level of totalitarian oppression that would be required to arrest that would be so draconian – a world government monitoring and controlling all computers? jackbooted thugs in black helicopters seizing rogue GPUs? – that we would not have a society left to protect.

So instead, there are two very straightforward ways to address the risk of bad people doing bad things with AI, and these are precisely what we should focus on.

First, we have laws on the books to criminalize most of the bad things that anyone is going to do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws – I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use. QED.

But you’ll notice what I slipped in there – I said we should focus first on preventing AI-assisted crimes before they happen – wouldn’t such prevention mean banning AI? Well, there’s another way to prevent such actions, and that’s by using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals – specifically the good guys whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.
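
As one concrete, hypothetical illustration of what the verification layer of such a system might look like, here is a minimal Python sketch using the third-party `cryptography` package to sign a piece of content and check the signature later. Everything a real system would need beyond this, such as key distribution, identity binding, and revocation, is deliberately left out.

```python
# Minimal sketch: sign published content so anyone can later verify its origin.
# Requires the third-party `cryptography` package (pip install cryptography).
# Key distribution, identity, and revocation are omitted here.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The creator generates a keypair once and publishes the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# When publishing a video, image, or article, the creator signs its raw bytes.
content = b"raw bytes of the original video or article"
signature = private_key.sign(content)


def is_authentic(pub, data: bytes, sig: bytes) -> bool:
    """Return True if `data` is unmodified and was signed by the key holder."""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False


print(is_authentic(public_key, content, signature))                   # True
print(is_authentic(public_key, content + b" tampered", signature))    # False
```

The design point is simply that authenticity can be established positively, by proving where content came from, rather than negatively, by trying to detect or ban whatever looks fake.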

And so, second, let’s mount major efforts to use AI for good, legitimate, defensive purposes. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.

There are already many smart people in and out of government doing exactly this, of course – but if we apply all of the effort and brainpower that’s currently fixated on the futile prospect of banning AI to using AI to protect against bad people doing bad things, I think there’s no question a world infused with AI will be much safer than the world we live in today.

The Actual Risk Of Not Pursuing AI With Maximum Force And Speed

There is one final, and real, AI risk that is probably the scariest of all:

AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like TikTok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

“We win, they lose.”

Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on the back foot, we in the United States and the West should lean into AI as hard as we possibly can.

We should seek to win the race to global AI technological superiority and ensure that China does not.

In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision.

What Is To Be Done?

I propose a simple plan:

  • Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.

  • Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither confront government-granted protection of big companies, nor should they receive government assistance. They should simply be allowed to compete. If and as startups don’t succeed, their presence in the market will also continuously motivate big companies to be their best – our economies and societies win either way.

  • Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and will ensure that AI is available to everyone who can benefit from it no matter who they are or how much money they have.

  • To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but also more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.

  • To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.

And that is how we use AI to save the world.

It’s time to build.

Legends and Heroes

I close with two simple statements.

The development of AI started in the 1940s, simultaneous with the invention of the computer. The first scientific paper on neural networks – the architecture of the AI we have today – was published in 1943. Entire generations of AI scientists over the last 80 years were born, went to school, worked, and in many cases passed away without seeing the payoff that we are receiving now. They are legends, every one.

Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.