Monday, December 30, 2024

Awe as a Pathway to Mental and Physical Health

Reading this open-access review from Maria Monroy and Dacher Keltner leaves me feeling substantially more mellow! Their abstract, followed by a quote from Emerson, and then a summary graphic...

How do experiences in nature or in spiritual contemplation or in being moved by music or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes—shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning—that benefit well-being. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
and,
In the woods, we return to reason and faith. There I feel that nothing can befall me in life—no disgrace, no calamity (leaving me my eyes), which nature cannot repair. Standing on the bare ground—my head bathed by the blithe air and uplifted into infinite space—all mean egotism vanishes. I become a transparent eyeball; I am nothing; I see all; the currents of the Universal Being circulate through me; I am part or parcel of God. The name of the nearest friend sounds then foreign and accidental; to be brothers, to be acquaintances, master or servant, is then a trifle and a disturbance. I am the lover of uncontained and immortal beauty.
-from Emerson R. W. (1836). Nature. Reprinted in Ralph Waldo Emerson, Nature and Other Essays (2009). Dover.
Fig. 1. Model for awe as a pathway to mental and physical health. This model shows that awe experiences will lead to the mediators that will lead to better mental and physical-health outcomes. Note that the relationships between awe experiences and mediators, and mediators and outcomes have been empirically identified; the entire pathways have only recently begun to be tested. One-headed arrows suggest directional relationships, and two-headed arrows suggest bidirectionality. DMN = default-mode network; PTSD = posttraumatic stress disorder.

 

 

The above text is a re-post.

Thursday, December 26, 2024

Oliver Sacks - The Machine Stops

 

A slightly edited MindBlog post from 2019 worth another read:

I want to point to a wonderful short essay written by Oliver Sacks before his death from cancer, in which he notes the parallels between the modern world he sees around him and E. M. Forster's prescient 1909 classic short story "The Machine Stops," in which Forster imagined a future in which humans lived in separate cells, communicating only by audio and visual devices (much as, today, the patrons of a bar at happy hour are more likely to be looking at their cell phones than chatting with each other). A few clips:

I cannot get used to seeing myriads of people in the street peering into little boxes or holding them in front of their faces, walking blithely in the path of moving traffic, totally out of touch with their surroundings. I am most alarmed by such distraction and inattention when I see young parents staring at their cell phones and ignoring their own babies as they walk or wheel them along. Such children, unable to attract their parents’ attention, must feel neglected, and they will surely show the effects of this in the years to come.
I am confronted every day with the complete disappearance of the old civilities. Social life, street life, and attention to people and things around one have largely disappeared, at least in big cities, where a majority of the population is now glued almost without pause to phones or other devices—jabbering, texting, playing games, turning more and more to virtual reality of every sort.
I worry more about the subtle, pervasive draining out of meaning, of intimate contact, from our society and our culture. When I was eighteen, I read Hume for the first time, and I was horrified by the vision he expressed in his eighteenth-century work “A Treatise of Human Nature,” in which he wrote that mankind is “nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement.” As a neurologist, I have seen many patients rendered amnesic by destruction of the memory systems in their brains, and I cannot help feeling that these people, having lost any sense of a past or a future and being caught in a flutter of ephemeral, ever-changing sensations, have in some way been reduced from human beings to Humean ones.
I have only to venture into the streets of my own neighborhood, the West Village, to see such Humean casualties by the thousand: younger people, for the most part, who have grown up in our social-media era, have no personal memory of how things were before, and no immunity to the seductions of digital life. What we are seeing—and bringing on ourselves—resembles a neurological catastrophe on a gigantic scale.
I see science, with its depth of thought, its palpable achievements and potentials, as equally important; and science, good science, is flourishing as never before, though it moves cautiously and slowly, its insights checked by continual self-testing and experimentation. I revere good writing and art and music, but it seems to me that only science, aided by human decency, common sense, farsightedness, and concern for the unfortunate and the poor, offers the world any hope in its present morass. This idea is explicit in Pope Francis’s encyclical and may be practiced not only with vast, centralized technologies but by workers, artisans, and farmers in the villages of the world. Between us, we can surely pull the world through its present crises and lead the way to a happier time ahead. As I face my own impending departure from the world, I have to believe in this—that mankind and our planet will survive, that life will continue, and that this will not be our final hour.

Monday, December 23, 2024

Stephen Fry: "AI: A Means to an End or a Means to Our End?"

I have to pass on to MindBlog readers and my future self this link to a brilliant lecture by Stephen Fry. It is an engaging and entertaining analysis, steeped in relevant history and precedents, of the ways we might be heading into the future. Here is just one clip from the piece:

We cling on to the fierce hope that the one feature machines will never be able to match is our imagination, our ability to penetrate the minds and feelings of others. We feel immeasurably enriched by this as individuals and as social animals. An AI may know more about the history of the First World War than all human historians put together. Every detail of every battle, all the recorded facts of personnel and materiel that can be known. But in fact I know more about it because I have read the poems of Wilfred Owen. I’ve read All Quiet on the Western Front. I’ve seen Kubrick’s Paths of Glory. So I can smell, touch, hear, feel the war, the gas, the comradeship, the sudden deaths and terrible fear. I know its meaning. My consciousness and experience of perceptions and feelings allows me access to the consciousness and experiences of others; their voices reach me. These are data that machines can scrape, but they cannot — to use a good old 60s phrase — relate to. Empathy. Identification. Compassion. Connection. Belonging. Something denied a sociopathic machine. Is this the only little island, the only little circle of land left to us as the waters of AI lap around our ankles? And for how long? We absolutely cannot be certain that, just as psychopaths (who aren’t all serial killers) can entirely convincingly feign empathy and emotional understanding, so will machines and very soon. They will fool us, just as sociopaths can and do, and frankly just as we all do to some bore or nuisance when we smile and nod encouragement but actually feel nothing for them. No, we can hope that our sense of human exceptionalism is justified and that what we regard as unique and special to us will keep us separate and valuable but we have to remember how much of our life and behaviour is performative, how many masks we wear and how the masks conceal only other masks. After all, is our acquisition of language any more conscious, real and worthy than the Bayesian parroting of the LLM? Chomsky tells us linguistic structures are embedded within us. We pick up the vocabulary and the rules from the data we scrape from around us - our parents, older siblings and peers. Out the sentences roll from us syntagmatically, we’ve no real idea how we do it. For example, how do we know the difference in connotation between the verbs to saunter and to swagger? It is very unlikely anyone taught us. We picked it up from context. In other words, from Bayesian priors, just like an LLM.

The fact is we don’t truly understand ourselves or how we came to be how and who we are. But we know about genes and we know about natural selection, the gravity that drives our evolution. And we are already noticing that principle at work with machines.



Wednesday, December 18, 2024

Sculpting new visual categories into the human brain.

Fascinating work from Iordan et al. (open access). I pass on the abstract and the first paragraph of the article, which makes clearer what they are doing.

Abstract

Learning requires changing the brain. This typically occurs through experience, study, or instruction. We report an alternate route for humans to acquire visual knowledge, through the direct sculpting of activity patterns in the human brain that mirror those expected to arise through learning. We used neurofeedback from closed-loop real-time functional MRI to create new categories of visual objects in the brain, without the participants’ explicit awareness. After neural sculpting, participants exhibited behavioral and neural biases for the learned, but not for the control categories. The ability to sculpt new perceptual distinctions into the human brain offers a noninvasive research paradigm for causal testing of the link between neural representations and behavior. As such, beyond its current application to perception, our work potentially has broad relevance for advancing understanding in other domains of cognition such as decision-making, memory, and motor control.

“For if someone were to mold a horse [from clay], it would be reasonable for us on seeing this to say that this previously did not exist but now does exist.”

Mnesarchus of Athens, ca. 100 BCE (1).

Humans continuously learn through experience, both implicitly [e.g., through statistical learning (2, 3)] and explicitly [e.g., through instruction (4, 5)]. Brain imaging has provided insight into the neural correlates of acquiring new knowledge (6) and learning new skills (7). As humans learn to group distinct items into a novel category, neural patterns of activity for those items become more similar to one another and, simultaneously, more distinct from patterns of other categories (8–10). We hypothesized that we could leverage this process using neurofeedback to help humans acquire perceptual knowledge, separate from experience, study, or instruction. Specifically, sculpting patterns of activity in the human brain (“molding the neural clay”) that mirror those expected to arise through learning of new visual categories may lead to enhanced perception of the sculpted categories (“they now exist”), relative to similar, control categories that were not sculpted. To test this hypothesis, we implemented a closed-loop system for neurofeedback manipulation (11–18) using functional MRI (fMRI) measurements recorded from the human brain in real time (every 2 s) and used this method to create new neural categories for complex visual objects. Crucially, in contrast to prior neurofeedback studies that focused exclusively on reinforcing or suppressing existing neural representations (11, 12), in the present work, we sought to use neurofeedback to create novel categories of objects that previously did not exist in the brain; we test whether this process can be used to generate significant changes in the neural representations of complex stimuli in the human cortex, and, as a result, alter perception.
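
To make the closed-loop idea in that paragraph more concrete, here is a minimal Python sketch of a single neurofeedback cycle. It is only a schematic, not the authors' pipeline: the helper functions, the region-of-interest coordinates, and the target pattern are hypothetical stand-ins, and real-time fMRI systems involve far more preprocessing and decoding machinery.

```python
import time
import numpy as np

def acquire_volume():
    """Stand-in for grabbing the most recent fMRI volume (one every ~2 s)."""
    return np.random.rand(64, 64, 36)

def decode_category_evidence(volume, target_pattern):
    """Correlate activity in a (hypothetical) region of interest with the
    multivoxel pattern expected to arise after learning the new category."""
    roi = volume[20:40, 20:40, 10:20].ravel()
    return np.corrcoef(roi, target_pattern)[0, 1]

def present_feedback(score):
    """Stand-in for updating the feedback display seen by the participant,
    e.g., growing a reward cue when the decoded evidence is high."""
    print(f"feedback level: {max(score, 0.0):.2f}")

# Pattern the experimenters want to "sculpt" into the brain (hypothetical).
target_pattern = np.random.rand(20 * 20 * 10)

for volume_index in range(10):      # one iteration per acquired volume
    volume = acquire_volume()
    evidence = decode_category_evidence(volume, target_pattern)
    present_feedback(evidence)      # reward brain states resembling the new category
    time.sleep(2)                   # the paper reports one measurement every 2 s
```

The essential logic is simply: acquire a measurement, score how closely the current brain state resembles the to-be-learned category pattern, and feed that score back to the participant every couple of seconds.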

Monday, December 16, 2024

Analysis of the dumbing down of language on social media over time

Di Marco et al. (open access) present a comparative analysis of eight different social media platforms (Facebook, Twitter, YouTube, Voat, Reddit, Usenet, Gab, and Telegram), focusing on the complexity of user comments and its shifts over time in a dataset of ~300 million English comments spanning 34 years. Their abstract:

Understanding the impact of digital platforms on user behavior presents foundational challenges, including issues related to polarization, misinformation dynamics, and variation in news consumption. Comparative analyses across platforms and over different years can provide critical insights into these phenomena. This study investigates the linguistic characteristics of user comments over 34 y, focusing on their complexity and temporal shifts. Using a dataset of approximately 300 million English comments from eight diverse platforms and topics, we examine user communications’ vocabulary size and linguistic richness and their evolution over time. Our findings reveal consistent patterns of complexity across social media platforms and topics, characterized by a nearly universal reduction in text length, diminished lexical richness, and decreased repetitiveness. Despite these trends, users consistently introduce new words into their comments at a nearly constant rate. This analysis underscores that platforms only partially influence the complexity of user comments but, instead, it reflects a broader pattern of linguistic change driven by social triggers, suggesting intrinsic tendencies in users’ online interactions comparable to historically recognized linguistic hybridization and contamination processes.
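
As a rough illustration of the kinds of measures the abstract mentions (text length, vocabulary size, lexical richness, and the rate at which new words appear), here is a small Python sketch. The toy comments and the simple tokenizer are my own stand-ins, not the authors' pipeline, which operated on roughly 300 million comments.

```python
import re
from collections import defaultdict

# Hypothetical (year, comment) pairs standing in for a platform's comment history.
comments = [
    (1990, "The protocol is documented in the FAQ; please read it before posting."),
    (2005, "lol yeah totally agree, the faq says the same thing"),
    (2020, "this"),
]

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

by_year = defaultdict(list)
for year, text in comments:
    by_year[year].append(tokens(text))

seen_words = set()
for year in sorted(by_year):
    year_tokens = [t for comment in by_year[year] for t in comment]
    vocab = set(year_tokens)
    new_words = vocab - seen_words      # words not used in any earlier year
    seen_words |= vocab
    mean_len = sum(len(c) for c in by_year[year]) / len(by_year[year])
    print(f"{year}: mean comment length = {mean_len:.1f} tokens, "
          f"vocabulary = {len(vocab)}, "
          f"type/token ratio = {len(vocab) / max(len(year_tokens), 1):.2f}, "
          f"new words = {len(new_words)}")
```

Tracking these per-year numbers across decades is what lets the authors describe shrinking comment length and diminished lexical richness alongside a steady introduction of new words.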

Thursday, December 12, 2024

Sustainability of Animal-Sourced Foods - how to deal with farting cows...

I've just read through a number of articles in a Special Feature section of the most recent issue of PNAS on the future of animal- and plant-sourced foods. After a balanced lead article by Qaim et al., one of the following articles that really caught my eye was "Mitigating methane emissions in grazing beef cattle with a seaweed-based feed additive: Implications for climate-smart agriculture." The first line of its abstract is: "This study suggests that the addition of pelleted bromoform-containing seaweed (Asparagopsis taxiformis) to the diet of grazing beef cattle can potentially reduce enteric methane (CH4) emissions (g/d) by an average of 37.7% without adversely impacting animal performance."

Tuesday, December 10, 2024

Neurons in the amygdala jointly encode the status of interacting individuals

From Lee et al.:

Highlights

-Monkeys infer the social status of conspecifics from videos of dyadic interactions
 
-During fixations, neural populations signal the social status of attended individuals
 
-Neurons in the amygdala jointly encode the status of interacting individuals
 

Summary

Successful integration into a hierarchical social group requires knowledge of the status of each individual and of the rules that govern social interactions within the group. In species that lack morphological indicators of status, social status can be inferred by observing the signals exchanged between individuals. We simulated social interactions between macaques by juxtaposing videos of aggressive and appeasing displays, where two individuals appeared in each other’s line of sight and their displays were timed to suggest the reciprocation of dominant and subordinate signals. Viewers of these videos successfully inferred the social status of the interacting characters. Dominant individuals attracted more social attention from viewers even when they were not engaged in social displays. Neurons in the viewers’ amygdala signaled the status of both the attended (fixated) and the unattended individuals, suggesting that in third-party observers of social interactions, the amygdala jointly signals the status of interacting parties.

 

Monday, December 09, 2024

An AI framework for neural–behavioral modeling

The work of Sani et al. (open access) is reported in the October 2024 issue of Nature Neuroscience. From the editor's summary:

Neural dynamics are complex and simultaneously relate to distinct behaviors. To address these challenges, Sani et al. have developed an AI framework termed DPAD that achieves nonlinear dynamical modeling of neural–behavioral data, dissociates behaviorally relevant neural dynamics, and localizes the source of nonlinearity in the dynamical model. What DPAD does is visualized as separating the overall brain activity into distinct pieces related to specific behaviors and discovering how these pieces fit together to build the overall activity.

Here is the Sani et al. abstract:

Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural–behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural–behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural–behavioral data.
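
DPAD itself is a nonlinear, multisection recurrent neural network trained to prioritize behaviorally relevant dynamics; the sketch below is only a static, linear caricature of that "explain behavior first, then the rest of the neural variance" idea, using reduced-rank regression on simulated data. All variable names, dimensions, and the two-stage split are my own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated stand-ins: T time points, neural channels Y, behavior Z ---
T, n_neurons, n_beh = 2000, 30, 3
latent = rng.standard_normal((T, 4))
Y = latent @ rng.standard_normal((4, n_neurons)) + 0.1 * rng.standard_normal((T, n_neurons))
Z = latent[:, :2] @ rng.standard_normal((2, n_beh)) + 0.1 * rng.standard_normal((T, n_beh))

# --- Stage 1: extract the neural dimensions most useful for predicting behavior ---
B_ols, *_ = np.linalg.lstsq(Y, Z, rcond=None)        # full least-squares map Y -> Z
_, _, Vt = np.linalg.svd(Y @ B_ols, full_matrices=False)
rank = 2
W1 = B_ols @ Vt[:rank].T                              # projection: neural -> behavior-relevant latents
X1 = Y @ W1
Z_hat = X1 @ np.linalg.lstsq(X1, Z, rcond=None)[0]    # behavior prediction from stage-1 latents

# --- Stage 2: summarize the remaining neural variance with ordinary PCA ---
Y_resid = Y - X1 @ np.linalg.lstsq(X1, Y, rcond=None)[0]
Y_resid -= Y_resid.mean(axis=0)
_, _, Vt_resid = np.linalg.svd(Y_resid, full_matrices=False)
X2 = Y_resid @ Vt_resid[:rank].T                      # latents needed only to explain neural activity

r2 = 1 - np.var(Z - Z_hat) / np.var(Z)
print(f"behavior prediction R^2 from stage-1 latents: {r2:.2f}")
```

In the real method both stages are dynamical and can be nonlinear in any of their mappings; this toy version only conveys why fitting the behavior-predictive subspace first keeps it from being swamped by other neural variance.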

Saturday, December 07, 2024

Battle of the Bots - AI gets creative with Brain Hacks

This post is meant to archive my recent exercise of asking ChatGPT 4o, Perplexity, Gemini Advanced, and Anthropic's Claude (which does not offer links to its responses) to expand MindBlog's Brain Hacks post into a more complete essay offering more background and context. They responded with slightly different styles and impressive results that required tweaking for only a few small errors. I thought my more concise text was more inviting, and I didn't see anything really new, so my next prompt (which I had asked ChatGPT to edit and improve) said:

"Your responses are well-constructed, but I’m seeking a deeper layer of insight—something that might challenge conventional perspectives or unveil an overlooked nuance. Can you reframe these ideas or introduce a novel approach that stretches beyond familiar interpretations??"

Here things got interesting. Click the links above and scroll down to the responses to this final prompt to see the results. The links will most likely be maintained as long as I keep paying accounts with the bots. (Anthropic does not offer links to its exchanges.)

I pass on here what I think is the most original and striking response, from Perplexity. It was right on target, brimming with new perspectives (some of them the subject of previous posts on MindBlog) and with links to a treasure trove of articles and ideas I was not familiar with, drawn from the seven sources it cited. Its response also provided a potpourri of more or less relevant graphic images of brains and models.

Here is Perplexity's text, with links to a few of the seven sources it cites, all of which you can obtain directly from Perplexity:

Reframing our understanding of introspective awareness and cognitive modulation requires us to challenge some fundamental assumptions about the nature of consciousness and self-regulation. Let's explore a more provocative perspective that integrates recent neuroscientific findings with evolutionary theory and cognitive science.

The Illusion of Conscious Control

Contrary to the intuitive notion that our conscious mind is the primary driver of behavior and emotion regulation, emerging evidence suggests that consciousness might be more of an observer than a controller. The brain's emotional and regulatory processes often occur before we become consciously aware of them, suggesting that our sense of volitional control might be, to some extent, an illusion.

Consider the possibility that what we perceive as "introspective awareness" is actually a post-hoc narrative constructed by our brain to make sense of unconscious processes. This perspective challenges the traditional view of top-down emotion regulation and suggests a more complex interplay between conscious and unconscious processes.

The Social Evolution of Consciousness

Recent theories propose that human consciousness evolved primarily for social survival rather than individual cognitive enhancement [ref]. This social evolutionary framework suggests that subjective awareness might lack independent causal influence over psychological processes or actions. Instead, it may serve as a communication tool for social interaction and collective decision-making.

This perspective reframes our understanding of self-awareness and emotion regulation. Rather than viewing these abilities as individual cognitive skills, we might consider them as evolved social tools that facilitate group cohesion and cooperation. The ability to regulate emotions, in this context, becomes less about personal control and more about social signaling and coordination.

Neurotransmitter Systems and Emotion Regulation

Recent research has begun to map the overlap between emotion regulation brain networks and specific neurotransmitter systems [ref] [ref]. This approach offers a novel way to understand the biochemical basis of self-regulation. Instead of focusing solely on cognitive strategies or neural networks, we might conceptualize emotion regulation as a dynamic interplay between neurotransmitter systems.

This biochemical perspective opens up new avenues for understanding individual differences in emotion regulation abilities. It suggests that what we perceive as "willpower" or "self-control" might be more accurately described as the efficient functioning of specific neurotransmitter systems.

The Myth of the Unified Self

Traditional models of self-awareness and emotion regulation often assume a unified, coherent self that observes and controls mental processes. However, neuroscientific evidence increasingly points to a more fragmented view of consciousness and self-hood.

Instead of a single, unified self, we might conceptualize the mind as a coalition of semi-independent processes, each with its own goals and methods of regulation. This "multiple selves" model challenges our intuitive understanding of self-awareness and suggests that emotion regulation might be more about negotiation between different aspects of our psyche than about top-down control.

Conclusion: Towards a New Paradigm

By integrating these perspectives, we can envision a new paradigm for understanding introspective awareness and cognitive modulation. This model would:

1. Recognize the limited role of conscious volition in emotion regulation
2. Emphasize the social and communicative functions of self-awareness
3. Incorporate the biochemical basis of regulatory processes
4. Embrace a more fragmented view of consciousness and selfhood

This approach challenges us to rethink fundamental concepts like free will, personal responsibility, and the nature of the self. It suggests that effective emotion regulation might be less about strengthening willpower and more about understanding and working with the complex, often unconscious processes that shape our emotional lives. By adopting this more nuanced and provocative perspective, we open up new avenues for research and potentially more effective interventions for emotional and cognitive well-being.



Friday, December 06, 2024

Magnetic soft microrobots for erectile dysfunction therapy!

I can't resist passing on these abstracts describing work reported by a large group of researchers at South China University of Technology, Guangzhou International Campus. I wonder whether the results obtained with both rat and beagle ED models will eventually prove relevant to 82-year-old retired professors?

Significance

Erectile dysfunction (ED), a prevalent form of sexual dysfunction, significantly affects fertility and quality of life. Mesenchymal stromal cell (MSC) therapies show promise for ED treatment, yet challenges such as low tissue retention and poor MSC survival in corpus cavernosum tissue limit their efficacy. In this study, we introduce a shape-adaptive and reactive oxygen species (ROS)-scavenging microrobot designed to overcome the challenges of vascularization and optimize MSC delivery. The microrobot enhances MSC retention and survival in corpus cavernosum tissue. In both rat and beagle models of ED, treatment with MSC-laden microrobots (MSC-Rob) promoted restored erectile function. Our results indicate that ED could be reversed via this approach, providing a promising outlook for its feasibility in human applications.

Abstract

Erectile dysfunction (ED) is a major threat to male fertility and quality of life, and mesenchymal stromal cells (MSCs) are a promising therapeutic option. However, therapeutic outcomes are compromised by low MSC retention and survival rates in corpus cavernosum tissue. Here, we developed an innovative magnetic soft microrobot comprising an ultrasoft hydrogel microsphere embedded with a magnetic nanoparticle chain for MSC delivery. This design also features phenylboronic acid groups for scavenging reactive oxygen species (ROS). With a Young’s modulus of less than 1 kPa, the ultrasoft microrobot adapts its shape within narrow blood vessels, ensuring a uniform distribution of MSCs within the corpus cavernosum. Our findings showed that compared with traditional MSC injections, the MSC delivery microrobot (MSC-Rob) significantly enhanced MSC retention and survival. In both rat and beagle ED models, MSC-Rob treatment accelerated the repair of corpus cavernosum tissue and restored erectile function. Single-cell RNA sequencing (scRNA-seq) revealed that MSC-Rob treatment facilitates nerve and blood vessel regeneration in the corpus cavernosum by increasing the presence of regenerative macrophages. Overall, our MSC-Rob not only advances the clinical application of MSCs for ED therapy but also broadens the scope of microrobots for other cell therapies.

 

Thursday, December 05, 2024

The Future of Warfare

Passing on an article from today's WSJ that I want to save, using MindBlog as my personal archive: 

OpenAI Forges Tie-Up To Defense Industry

OpenAI, the artificial-intelligence company behind ChatGPT, is getting into the business of war.

The world’s most valuable AI company has agreed to work with Anduril Industries, a leading defense-tech startup, to add its technology to systems the U.S. military uses to counter drone attacks. The partnership, which the companies announced Wednesday, marks OpenAI’s deepest involvement yet with the Defense Department and its first tie-up with a commercial weapons maker.

It is the latest example of Silicon Valley’s dramatic turn from shunning the Pentagon a few years ago to forging deeper ties with the national security complex.

OpenAI, valued at more than $150 billion, previously barred its AI from being used in military and warfare. In January, it changed its policies to allow some collaborations with the military.

While the company still prohibits the use of its technology in offensive weapons, it has made deals with the Defense Department for cybersecurity work and other projects. This year, OpenAI added former National Security Agency chief Paul Nakasone to its board and hired former Defense Department official Sasha Baker to create a team focused on national-security policy.

Other tech companies are making similar moves, arguing that the U.S. must treat AI technology as a strategic asset to bolster national security against countries like China. Last month, startup Anthropic said it would give access to its AI to the U.S. military through a partnership with Palantir Technologies.

OpenAI will incorporate its tech into Anduril’s counterdrone systems software, the companies said.

The Anduril systems detect, assess and track unmanned aircraft. If a threatening drone is identified, militaries can use electronic jamming, drones and other means to take it down.

The AI could improve the accuracy and speed of detecting and responding to drones, putting fewer people in harm’s way, Anduril said.

The Anduril deal ties OpenAI to some tech leaders who have espoused conservative ideals and backed Donald Trump. Anduril co-founder Palmer Luckey was an early and vocal Trump supporter from the tech industry. Luckey’s sister is married to Matt Gaetz, Trump’s pick to lead the Justice Department before he withdrew from consideration.

Luckey is also close to Trump’s ally, Elon Musk.

Musk has praised Luckey’s entrepreneurship and encouraged him to join the Trump transition team.

Luckey has, at times, fashioned himself as a younger Musk and references Musk as a pioneer in selling startup technology to the Pentagon.

The alliance between Anduril and OpenAI might also help buffer the AI company’s chief executive, Sam Altman, from possible backlash from Musk, who has openly disparaged Altman and sued his company. Musk was a co-founder of OpenAI but stepped away from the company in 2018 after clashing with Altman over the company’s direction. Last year, Musk founded a rival AI lab, xAI.

At an event on Wednesday, Altman said he didn’t think Musk would use his close relationship with Trump to undermine rivals.

“It would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors,” Altman said at the New York Times’s DealBook conference in New York City. “I don’t think people would tolerate that. I don’t think Elon would do it.”

Anduril is leading the push by venture-backed startups to sell high-tech, AI-powered systems to replace traditional tanks and attack helicopters. The company sells weapons to militaries around the world and AI software that enables the weapons to act autonomously.

Anduril Chief Executive Officer Brian Schimpf said in a statement that adding OpenAI technology to Anduril systems will “enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations.”

Anduril, valued at $14 billion, is one of the few success stories among a crowd of fledgling defense startups. In November, the company announced a $200 million contract to provide the U.S. Marine Corps with its counterdrone system. The company said the Defense Department uses the counterdrone systems to protect military installations.

As part of this agreement, OpenAI’s technology won’t be used with Anduril’s other weapons systems, the companies said.

Altman said in a statement that his company wants to “ensure the technology upholds democratic values.”

The companies declined to comment on the financial terms of the partnership.

Technology entrepreneurs, backed by billions of dollars in venture capital, have bet that future conflicts will hinge on large numbers of small, AI-powered autonomous systems to attack and defend. Defense-tech companies and some Pentagon leaders say the U.S. military needs better AI for a potential conflict with China and other sophisticated adversaries.

AI has proved increasingly important for keeping drones in the air after the rise of electronic warfare, which uses jammers to block GPS signals and radio frequencies that drones use to fly. AI can also help soldiers and military chiefs filter large amounts of battlefield data.

Wading deeper into defense opens another source of revenue for OpenAI, which seeks to evolve from the nonprofit lab of its roots to a moneymaking leader in the AI industry. The computing costs to develop and operate AI models are exorbitant, and the company is losing billions of dollars a year.