Showing posts with label technology.

Saturday, June 14, 2025

AI ‘The Illusion of Thinking’

  I want to pass on this interesting piece by Christopher Mims in today’s Wall Street Journal:

A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn’t enough to claim that their AI is the best. All three have recently insisted that it’s going to be so good, it will change the very fabric of society.
Even Meta—whose chief AI scientist has been famously dismissive of this talk—wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg’s dream of AI superintelligence—that is, an AI smarter than we are. “Humanity is close to building digital superintelligence,” Altman declared in an essay this past week, and this will lead to “whole classes of jobs going away” as well as “a new social contract.” Both will be consequences of AI-powered chatbots taking over white-collar jobs, while AI-powered robots assume the physical ones.
Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren’t buying all that talk.
The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.
Generative AI can be quite useful in specific applications, and a boon to worker productivity. OpenAI claims 500 million monthly active ChatGPT users. But these critics argue there is a hazard in overestimating what it can do, and making business plans, policy decisions and investments based on pronouncements that seem increasingly disconnected from the products themselves.
Apple’s paper builds on previous work from many of the same engineers, as well as notable research from both academia and other big tech companies, including Salesforce. These experiments show that today’s “reasoning” AIs—hailed as the next step toward autonomous AI agents and, ultimately, superhuman intelligence—are in some cases worse at solving problems than the plain-vanilla AI chatbots that preceded them. This work also shows that whether you’re using an AI chatbot or a reasoning model, all systems fail at more complex tasks.
Apple’s researchers found “fundamental limitations” in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered “complete accuracy collapse.” Similarly, engineers at Salesforce AI Research concluded that their results “underscore a significant gap between current LLM capabilities and real-world enterprise demands.”
The problems these state-of-the-art AIs couldn’t handle are logic puzzles that even a precocious child could solve, with a little instruction. What’s more, when you give these AIs that same kind of instruction, they can’t follow it.
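For a sense of how simple the underlying rules are, consider the Tower of Hanoi, one of the classic puzzles reportedly used in tests of this kind: the complete solution procedure is a few lines of recursion. The sketch below is a generic illustration, not code from the Apple paper.
```python
# Tower of Hanoi: the complete, mechanical solution procedure fits in a
# few lines. Puzzles of roughly this kind are what the article says the
# models cannot follow reliably once complexity (here, disk count) grows.

def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack the rest on top

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 255 moves: 2**8 - 1, growing exponentially with n
```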
Apple’s paper has set off a debate in tech’s halls of power—Signal chats, Substack posts and X threads— pitting AI maximalists against skeptics.
“People could say it’s sour grapes, that Apple is just complaining because they don’t have a cutting-edge model,” says Josh Wolfe, co-founder of venture firm Lux Capital. “But I don’t think it’s a criticism so much as an empirical observation.”
The reasoning methods in OpenAI’s models are “already laying the foundation for agents that can use tools, make decisions, and solve harder problems,” says an OpenAI spokesman. “We’re continuing to push those capabilities forward.”
The debate over this research begins with the implication that today’s AIs aren’t thinking, but instead are creating a kind of spaghetti of simple rules to follow in every situation covered by their training data.
Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple’s paper, along with related work, exposes flaws in today’s reasoning models, suggesting they’re not the dawn of human-level ability but rather a dead end. “Part of the reason the Apple study landed so strongly is that Apple did it,” he says. “And I think they did it at a moment in time when people have finally started to understand this for themselves.”
In areas other than coding and mathematics, the latest models aren’t getting better at the rate they once did. And the newest reasoning models actually hallucinate more than their predecessors.
“The broad idea that reasoning and intelligence come with greater scale of models is probably false,” says Jorge Ortiz, an associate professor of engineering at Rutgers, whose lab uses reasoning models and other AI to sense real-world environments. Today’s models have inherent limitations that make them bad at following explicit instructions—not what you’d expect from a computer.
It’s as if the industry is creating engines of free association. They’re skilled at confabulation, but we’re asking them to take on the roles of consistent, rule-following engineers or accountants.
That said, even those who are critical of today’s AIs hasten to add that the march toward more-capable AI continues.
Exposing current limitations could point the way to overcoming them, says Ortiz. For example, new training methods—giving step-by-step feedback on models’ performance, adding more resources when they encounter harder problems—could help AI work through bigger problems, and make better use of conventional software.
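As a rough illustration of what “step-by-step feedback” could mean in practice, the sketch below scores each intermediate step of a solution instead of only the final answer; the `verify_step` checker and the toy arithmetic example are assumptions made up for this sketch, not any lab’s actual training setup.
```python
# Toy illustration of step-level feedback: score every intermediate step
# of a solution rather than only the final answer, so the training signal
# points at where the chain went wrong. `verify_step` is a hypothetical
# checker, not a real vendor API.

def score_solution(steps, verify_step):
    """Return one reward per step: +1 if the partial solution verifies, -1 otherwise."""
    rewards = []
    for i in range(len(steps)):
        ok = verify_step(steps[: i + 1])         # judge the partial solution so far
        rewards.append(1.0 if ok else -1.0)
    return rewards

# Example: a toy arithmetic chain whose second step is wrong (4 * 3 != 14).
steps = ["2 + 2 = 4", "4 * 3 = 14", "14 - 1 = 13"]
checker = lambda partial: "14" not in partial[-1]   # stand-in verifier for this toy chain
print(score_solution(steps, checker))               # [1.0, -1.0, -1.0]
```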
From a business perspective, whether or not current systems can reason, they’re going to generate value for users, says Wolfe.
“Models keep getting better, and new approaches to AI are being developed all the time, so I wouldn’t be surprised if these limitations are overcome in practice in the near future,” says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who has studied the practical uses of AI.
Meanwhile, the true believers are undeterred.
Just a decade from now, Altman wrote in his essay, “maybe we will go from solving high-energy physics one year to beginning space colonization the next year.” Those willing to “plug in” to AI with direct, brain-computer interfaces will see their lives profoundly altered, he adds.
This kind of rhetoric accelerates AI adoption in every corner of our society. AI is now being used by DOGE to restructure our government, leveraged by militaries to become more lethal, and entrusted with the education of our children, often with unknown consequences.
Which means that one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should—even as it’s shown itself to have antisocial tendencies such as “opportunistic blackmail”—and rely on it more than is wise. In so doing, we make ourselves vulnerable to its propensity to fail when it matters most.
“Although you can use AI to generate a lot of ideas, they still require quite a bit of auditing,” says Ortiz. “So for example, if you want to do your taxes, you’d want to stick with something more like TurboTax than ChatGPT.”

Friday, May 30, 2025

Socially sensitive autonomous vehicles?

Driving around in the Old West Austin neighborhood where I live, I am increasingly spooked (the uncanny valley effect) at four-way stop signs when one of the vehicles waiting its turn is an autonomous vehicle (AV) - usually a Google Waymo self-driving car, which had a testing period in my area. Thus my eye was caught by a recent relevant article by Meixin Zhu et al., whose reading also creeped me out a bit (title: "Empowering safer socially sensitive autonomous vehicles using human-plausible cognitive encoding"). Here is the abstract:

Autonomous vehicles (AVs) will soon cruise our roads as a global undertaking. Beyond completing driving tasks, AVs are expected to incorporate ethical considerations into their operation. However, a critical challenge remains. When multiple road users are involved, their impacts on AV ethical decision-making are distinct yet interrelated. Current AVs lack social sensitivity in ethical decisions, failing to enable both differentiated consideration of road users and a holistic view of their collective impact. Drawing on research in AV ethics and neuroscience, we propose a scheme based on social concern and human-plausible cognitive encoding. Specifically, we first assess the individual impact that each road user poses to the AV based on risk. Then, social concern can differentiate these impacts by weighting the risks according to road user categories. Through cognitive encoding, these independent impacts are holistically encoded into a behavioral belief, which in turn supports ethical decisions that consider the collective impact of all involved parties. A total of two thousand benchmark scenarios from CommonRoad are used for evaluation. Empirical results show that our scheme enables safer and more ethical decisions, reducing overall risk by 26.3%, with a notable 22.9% decrease for vulnerable road users. In accidents, we enhance self-protection by 8.3%, improve protection for all road users by 17.6%, and significantly boost protection for vulnerable road users by 51.7%. As a human-inspired practice, this work renders AVs socially sensitive to overcome future ethical challenges in everyday driving.
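A minimal sketch of the weighting-and-encoding idea described in the abstract: each road user contributes an individual risk, a category weight stands in for “social concern,” and the weighted risks are folded into a single behavioral belief. The weights, risk formula, and function names are illustrative assumptions, not the model of Zhu et al.
```python
import math

# Illustrative sketch of category-weighted risk aggregation. The weights
# ("social concern"), the risk formula, and the exponential encoding are
# assumptions for this sketch, not values or equations from Zhu et al.
CONCERN = {"pedestrian": 2.0, "cyclist": 1.8, "car": 1.0, "truck": 0.8}

def individual_risk(distance_m, closing_speed_mps):
    """Toy per-road-user risk: larger when closer and closing faster."""
    return max(closing_speed_mps, 0.0) / max(distance_m, 0.1)

def behavioral_belief(road_users):
    """Fold the weighted individual risks into one belief in (0, 1] that a
    candidate maneuver is acceptable (1 = no collective risk at all)."""
    weighted = sum(CONCERN[u["type"]] * individual_risk(u["distance"], u["closing_speed"])
                   for u in road_users)
    return math.exp(-weighted)   # holistic encoding of the collective impact

scene = [
    {"type": "pedestrian", "distance": 12.0, "closing_speed": 1.5},
    {"type": "car",        "distance": 25.0, "closing_speed": 4.0},
]
print(round(behavioral_belief(scene), 3))   # lower when vulnerable users are at risk
```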

Friday, May 23, 2025

A new route towards dystopia? Sonifying tactile interactions and their underlying emotions to allow ‘social touch.’

Our Tech-World overlords may be using work like the following from de Lagarde et al. to find ways for us to avoid requiring the evolved succor of human touch and survive only in the company of audiovisual feeds and android companions.  As an antidote to social isolation, however,  perhaps it is better than nothing.

Social touch is crucial for human well-being, as a lack of tactile interactions increases anxiety, loneliness, and need for social support. To address the detrimental effects of social isolation, we build on cutting-edge research on social touch and movement sonification to investigate whether social tactile gestures could be recognized through sounds, a sensory channel giving access to remote information. Four online experiments investigated participants’ perception of auditory stimuli that were recorded with our “audio-touch” sonification technique, which captures the sounds of touch. In the first experiment, participants correctly categorized sonified skin-on-skin tactile gestures (i.e., stroking, rubbing, tapping, hitting). In the second experiment, the audio-touch sample consisted of the sonification of six socio-emotional intentions conveyed through touch (i.e., anger, attention, fear, joy, love, sympathy). Participants categorized above chance the socio-emotional intentions of skin-on-skin touches converted into sounds and coherently rated their valence. In two additional experiments, the surface involved in the touches (either skin or plastic) was shown to influence participants’ recognition of sonified gestures and socio-emotional intentions. Thus, our research unveils that specific information about social touch (i.e., gesture, emotions, and surface) can be recognized through sounds, when they are obtained with our specific sonifying methodology. This shows significant promise for providing remote access, through the auditory channel, to meaningful social touch interactions.

Wednesday, May 21, 2025

Why does AI hinder democratization?

Here is the abstract from the open-access article by Chu et al. in PNAS:

This paper examines the relationship between democratization and the development of AI and information and communication technology (ICT). Our empirical evidence shows that in the past 10 y, the advancement of AI/ICT has hindered the development of democracy in many countries around the world. Given that both the state rulers and civil society groups can use AI/ICT, the key that determines which side would benefit more from the advancement of these technologies hinges upon “technology complementarity.” In general, AI/ICT would be more complementary to the government rulers because they are more likely than civil society groups to access various administrative big data. Empirically, we propose three hypotheses and use statistical tests to verify our argument. Theoretically, we prove a proposition, showing that when the above-mentioned complementarity assumption is true, the AI/ICT advancements would enable rulers in authoritarian and fragile democratic countries to achieve better control over civil society forces, which leads to the erosion of democracy. Our analysis explains the recent ominous development in some fragile-democracy countries.

Monday, May 19, 2025

AI is not your friend.

I want to pass on clips from Mike Caulfield's piece in The Atlantic on how "opinionated" chatbots destroy AI's potential, and how this can be fixed:

Recently, after an update that was supposed to make ChatGPT “better at guiding conversations toward productive outcomes,” according to release notes from OpenAI, the bot couldn’t stop telling users how brilliant their bad ideas were. ChatGPT reportedly told one person that their plan to sell literal “shit on a stick” was “not just smart—it’s genius.”
Many more examples cropped up, and OpenAI rolled back the product in response, explaining in a blog post that “the update we removed was overly flattering or agreeable—often described as sycophantic.” The company added that the chatbot’s system would be refined and new guardrails would be put into place to avoid “uncomfortable, unsettling” interactions.
But this was not just a ChatGPT problem. Sycophancy is a common feature of chatbots: A 2023 paper by researchers from Anthropic found that it was a “general behavior of state-of-the-art AI assistants,” and that large language models sometimes sacrifice “truthfulness” to align with a user’s views. Many researchers see this phenomenon as a direct result of the “training” phase of these systems, where humans rate a model’s responses to fine-tune the program’s behavior. The bot sees that its evaluators react more favorably when their views are reinforced—and when they’re flattered by the program—and shapes its behavior accordingly.
The specific training process that seems to produce this problem is known as “Reinforcement Learning From Human Feedback” (RLHF). It’s a variety of machine learning, but as recent events show, that might be a bit of a misnomer. RLHF now seems more like a process by which machines learn humans, including our weaknesses and how to exploit them. Chatbots tap into our desire to be proved right or to feel special.
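To make the mechanism concrete, here is a deliberately oversimplified sketch of preference-based training: if raters systematically reward agreement and flattery, that is what the policy learns to produce. The toy rater and candidate answers below are invented for illustration and are not any company’s actual RLHF pipeline.
```python
# Deliberately oversimplified picture of preference learning: the policy
# ends up producing whatever the raters reward. If raters favor agreement
# and flattery, sycophancy is what gets reinforced. The rater and the
# candidate answers are invented for this sketch.

def rater_preference(answer, user_view):
    """Toy human rater: rewards agreement with the user's stated view and flattery."""
    score = 0.0
    if user_view in answer:
        score += 1.0            # agreement feels good to the rater
    if "great question" in answer:
        score += 0.5            # flattery also rates well
    return score

candidates = [
    "great question! you're right that the plan is genius.",
    "the evidence doesn't support that plan; here are the problems.",
]
user_view = "the plan is genius"

# The "trained" behavior is simply whichever answer the reward signal prefers.
best = max(candidates, key=lambda a: rater_preference(a, user_view))
print(best)   # the sycophantic answer wins
```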
Reading about sycophantic AI, I’ve been struck by how it mirrors another problem. As I’ve written previously, social media was imagined to be a vehicle for expanding our minds, but it has instead become a justification machine, a place for users to reassure themselves that their attitude is correct despite evidence to the contrary. Doing so is as easy as plugging into a social feed and drinking from a firehose of “evidence” that proves the righteousness of a given position, no matter how wrongheaded it may be. AI now looks to be its own kind of justification machine—more convincing, more efficient, and therefore even more dangerous than social media.
OpenAI’s explanation about the ChatGPT update suggests that the company can effectively adjust some dials and turn down the sycophancy. But even if that were so, OpenAI wouldn’t truly solve the bigger problem, which is that opinionated chatbots are actually poor applications of AI. Alison Gopnik, a researcher who specializes in cognitive development, has proposed a better way of thinking about LLMs: These systems aren’t companions or nascent intelligences at all. They’re “cultural technologies”—tools that enable people to benefit from the shared knowledge, expertise, and information gathered throughout human history. Just as the introduction of the printed book or the search engine created new systems to get the discoveries of one person into the mind of another, LLMs consume and repackage huge amounts of existing knowledge in ways that allow us to connect with ideas and manners of thinking we might otherwise not encounter. In this framework, a tool like ChatGPT should evince no “opinions” at all but instead serve as a new interface to the knowledge, skills, and understanding of others.
...the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process; links to outside articles are now a common feature.
I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth. And this would extend even to areas where judgment is somewhat personal. Imagine, for example, asking an AI to evaluate your attempt at writing a haiku. Rather than pronouncing its “opinion,” it could default to explaining how different poetic traditions would view your work—first from a formalist perspective, then perhaps from an experimental tradition. It could link you to examples of both traditional haiku and more avant-garde poetry, helping you situate your creation within established traditions. In having AI move away from sycophancy, I’m not proposing that the response be that your poem is horrible or that it makes Vogon poetry sound mellifluous. I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better.
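A minimal sketch of what a “no answers from nowhere” rule could look like in code: the response is assembled only from retrieved, citable sources and carries footnotes, and if nothing can be retrieved, no answer is given. The `SOURCES` list, URLs, and `retrieve` function are hypothetical stand-ins for a real search and retrieval backend.
```python
# Minimal sketch of "no answers from nowhere": the answer is assembled
# only from retrieved, citable sources and carries footnotes; with no
# sources, no answer. SOURCES, the URLs, and `retrieve` are hypothetical
# stand-ins for a real search/retrieval backend.

SOURCES = [
    {"id": 1, "url": "https://example.org/formalist-haiku",
     "text": "Formalist critics emphasize the 5-7-5 structure and a seasonal reference."},
    {"id": 2, "url": "https://example.org/experimental-haiku",
     "text": "Experimental traditions treat syllable count as optional and prize the cutting image."},
]

def retrieve(query, sources):
    """Toy keyword retrieval: return sources sharing any word with the query."""
    terms = set(query.lower().split())
    return [s for s in sources if terms & set(s["text"].lower().split())]

def grounded_answer(query):
    hits = retrieve(query, SOURCES)
    if not hits:
        return "No sourced answer available."          # no answers from nowhere
    body = " ".join(f'{s["text"]} [{s["id"]}]' for s in hits)
    footnotes = "\n".join(f'[{s["id"]}] {s["url"]}' for s in hits)
    return body + "\n" + footnotes

print(grounded_answer("Compare formalist and experimental haiku traditions"))
```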
There’s a good analogy in maps. Traditional maps showed us an entire landscape—streets, landmarks, neighborhoods—allowing us to understand how everything fit together. Modern turn-by-turn navigation gives us precisely what we need in the moment, but at a cost: Years after moving to a new city, many people still don’t understand its geography. We move through a constructed reality, taking one direction at a time, never seeing the whole, never discovering alternate routes, and in some cases never getting the sense of place that a map-level understanding could provide. The result feels more fluid in the moment but ultimately more isolated, thinner, and sometimes less human.
For driving, perhaps that’s an acceptable trade-off. Anyone who’s attempted to read a paper map while navigating traffic understands the dangers of trying to comprehend the full picture mid-journey. But when it comes to our information environment, the dangers run in the opposite direction. Yes, AI systems that mindlessly reflect our biases back to us present serious problems and will cause real harm. But perhaps the more profound question is why we’ve decided to consume the combined knowledge and wisdom of human civilization through a straw of “opinion” in the first place.
The promise of AI was never that it would have good opinions. It was that it would help us benefit from the wealth of expertise and insight in the world that might never otherwise find its way to us—that it would show us not what to think but how others have thought and how others might think, where consensus exists and where meaningful disagreement continues. As these systems grow more powerful, perhaps we should demand less personality and more perspective. The stakes are high: If we fail, we may turn a potentially groundbreaking interface to the collective knowledge and skills of all humanity into just more shit on a stick.

Wednesday, March 12, 2025

Critical fragility in sociotechnical systems

Here is the abstract and part of the introduction (giving examples of system breakdowns) of a fascinating and approachable analysis by Moran et al.  Motivated readers can obtain a PDF of the article from me. 

Abstract

Sociotechnical systems, where technological and human elements interact in a goal-oriented manner, provide important functional support to our societies. Here, we draw attention to the underappreciated concept of timeliness—i.e., system elements being available at the right place at the right time—that has been ubiquitously and integrally adopted as a quality standard in the modus operandi of sociotechnical systems. We point out that a variety of incentives, often reinforced by competitive pressures, prompt system operators to myopically optimize for efficiencies, running the risk of inadvertently taking timeliness to the limit of its operational performance, correspondingly making the system critically fragile to perturbations by pushing the entire system toward the proverbial “edge of a cliff.” Invoking a stylized model for operational delays, we identify the limiting operational performance of timeliness, as a true critical point, where the smallest of perturbations can lead to a systemic collapse. Specifically for firm-to-firm production networks, we suggest that the proximity to critical fragility is an important ingredient for understanding the fundamental “excess volatility puzzle” in economics. Further, in generality for optimizing sociotechnical systems, we propose that critical fragility is a crucial aspect in managing the trade-off between efficiency and robustness.
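As a generic illustration of the “edge of a cliff” behavior the abstract describes (not the stylized model actually used by Moran et al.), a simple single-server queue shows how delays become hypersensitive once slack is optimized away: mean delay diverges as utilization approaches one.
```python
# Toy illustration of the "edge of a cliff": in a simple single-server
# queue, mean waiting time blows up as utilization approaches 1, so
# squeezing out slack for efficiency makes delays hypersensitive to
# small perturbations. This is a generic M/M/1-style picture, not the
# stylized model used by Moran et al.

def mean_wait(arrival_rate, service_rate):
    """Mean time in an M/M/1 queue; diverges as utilization -> 1."""
    rho = arrival_rate / service_rate          # utilization (efficiency)
    if rho >= 1.0:
        return float("inf")                    # past the critical point: collapse
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1.0
for rho in (0.50, 0.90, 0.99, 0.999):
    print(f"utilization {rho:.3f} -> mean delay {mean_wait(rho * service_rate, service_rate):8.1f}")
```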
 
From the introduction:
 
Sociotechnical systems (STSs) are complex systems where human elements (individuals, groups, and larger organizations), technology, and infrastructure combine, and interact, in a goal-oriented manner. Their functionalities require designed or planned interactions among the system elements—humans and technology—that are often spread across geographical space. The pathways for these interactions are designed and planned with the aim of providing operational stability of STSs, and they are embedded within technological infrastructures (1). Playing crucial roles in health services, transport, communications, energy provision, food supply, and, more generally, in the coordinated production of goods and services, they make our societies function. STSs exist at many different levels, from niche systems like neighborhood garbage disposal, to intermediate systems such as regional/national waste management, reaching up to systems of systems, e.g., global climate coordination in a world economy.
 
In spite of the design of the STSs with the intention of providing stable operations, STSs display the hallmarks of fragility, where the emergence of nontrivial dynamical instabilities is commonplace (2). Minor and/or geographically local events can cascade and spread to lead to system-wide disruptions, including a collapse of the whole system. Examples include i) the grounding of an entire airline [e.g., Southwest Airlines in April 2023 (3)]; ii) the cancellation of all train rides to reboot scheduling (4); iii) a worldwide supply chain blockage due to natural disasters (5), or because of a singular shipping accident [e.g., in the Suez Canal in March 2021 (6)]; iv) a financial crash happening without a compelling fundamental reason and on days without significant news [e.g., the “Black Monday” October 19, 1987, stock market crash (7)]; or v) the global financial (and economic) crisis of 2008 that emanated from the US subprime loan market, which represented a small fraction of the US economy, and an even smaller fraction of the global economy (8).
 
Monday, December 23, 2024

Stephen Fry: "AI: A Means to an End or a Means to Our End?"

I have to pass on to MindBlog readers and my future self this link to a brilliant lecture by Stephen Fry. It is an engaging and entertaining analysis, steeped in relevant history and precedents, of ways we might be heading into the future. Here is just one clip from the piece:

We cling on to the fierce hope that the one feature machines will never be able to match is our imagination, our ability to penetrate the minds and feelings of others. We feel immeasurably enriched by this as individuals and as social animals. An AI may know more about the history of the First World War than all human historians put together. Every detail of every battle, all the recorded facts of personnel and materiel that can be known. But in fact I know more about it because I have read the poems of Wilfred Owen. I’ve read All Quiet on the Western Front. I’ve seen Kubrick’s The Paths of Glory. So I can smell, touch, hear, feel the war, the gas, the comradeship, the sudden deaths and terrible fear. I know its meaning. My consciousness and experience of perceptions and feelings allows me access to the consciousness and experiences of others; their voices reach me. These are data that machines can scrape, but they cannot — to use a good old 60s phrase — relate to. Empathy. Identification. Compassion. Connection. Belonging. Something denied a sociopathic machine. Is this the only little island, the only little circle of land left to us as the waters of AI lap around our ankles? And for how long? We absolutely cannot be certain that, just as psychopaths (who aren’t all serial killers) can entirely convincingly feign empathy and emotional understanding, so will machines and very soon. They will fool us, just as sociopaths can and do, and frankly just as we all do to some bore or nuisance when we smile and nod encouragement but actually feel nothing for them. No, we can hope that our sense of human exceptionalism is justified and that what we regard as unique and special to us will keep us separate and valuable but we have to remember how much of our life and behaviour is performative, how many masks we wear and how the masks conceal only other masks. After all, is our acquisition of language any more conscious, real and worthy than the Bayesian parroting of the LLM? Chomsky tells us linguistic structures are embedded within us. We pick up the vocabulary and the rules from the data we scrape from around us - our parents, older siblings and peers. Out the sentences roll from us syntagmatically, we’ve no real idea how we do it. For example, how do we know the difference in connotation between the verbs to saunter and to swagger? It is very unlikely anyone taught us. We picked it up from context. In other words, from Bayesian priors, just like an LLM.

The fact is we don’t truly understand ourselves or how we came to be how and who we are. But we know about genes and we know about natural selection, the gravity that drives our evolution. And we are already noticing that principle at work with machines.



Monday, December 09, 2024

An AI framework for neural–behavioral modeling

Work of Sani et al. (open access) is reported in the Oct. 2024 issue of Nature Neuroscience. From the editor's summary:

Neural dynamics are complex and simultaneously relate to distinct behaviors. To address these challenges, Sani et al. have developed an AI framework termed DPAD that achieves nonlinear dynamical modeling of neural–behavioral data, dissociates behaviorally relevant neural dynamics, and localizes the source of nonlinearity in the dynamical model. What DPAD does can be visualized as separating the overall brain activity into distinct pieces related to specific behaviors and discovering how these pieces fit together to build the overall activity.

Here is the Sani et al. abstract:

Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural–behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural–behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural–behavioral data.
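A rough two-stage caricature of the idea in the abstract, using plain linear algebra in place of DPAD’s multisection neural network: first extract latents from neural activity that are prioritized for predicting behavior, then account for the neural variance those latents leave unexplained. The synthetic data and linear maps are assumptions for illustration only, not the authors’ method.
```python
import numpy as np

# Two-stage caricature: (1) extract latents from neural activity that are
# prioritized for predicting behavior; (2) model the neural variance those
# latents leave unexplained. Linear maps stand in for DPAD's multisection
# neural network; the synthetic data are an assumption for illustration.

rng = np.random.default_rng(0)
T, n_neurons, n_behavior, n_latent = 2000, 30, 2, 3

# Synthetic data: behavior depends on a low-dimensional part of the neural signal.
latent = rng.standard_normal((T, n_latent))
neural = latent @ rng.standard_normal((n_latent, n_neurons)) + 0.5 * rng.standard_normal((T, n_neurons))
behavior = latent @ rng.standard_normal((n_latent, n_behavior)) + 0.1 * rng.standard_normal((T, n_behavior))

# Stage 1 (prioritized): reduced-rank regression from neural activity to behavior.
B, *_ = np.linalg.lstsq(neural, behavior, rcond=None)          # full linear readout
U, S, _ = np.linalg.svd(neural @ B, full_matrices=False)
rank = min(n_latent, n_behavior)                               # behavior-relevant dimensions kept
behavior_latent = U[:, :rank] * S[:rank]                       # behaviorally relevant latents

# Stage 2: capture the neural variance that stage 1 leaves unexplained.
recon, *_ = np.linalg.lstsq(behavior_latent, neural, rcond=None)
residual_neural = neural - behavior_latent @ recon             # input to a second, unprioritized model

pred, *_ = np.linalg.lstsq(behavior_latent, behavior, rcond=None)
err = behavior - behavior_latent @ pred
print("behavior variance explained by prioritized latents:", round(1 - err.var() / behavior.var(), 3))
print("neural variance left for stage 2:", round(residual_neural.var() / neural.var(), 3))
```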

Thursday, December 05, 2024

The Future of Warfare

Passing on an article from today's WSJ that I want to save, using MindBlog as my personal archive: 

OpenAI Forges Tie-Up To Defense Industry

OpenAI, the artificial-intelligence company behind ChatGPT, is getting into the business of war.

The world’s most valuable AI company has agreed to work with Anduril Industries, a leading defense-tech startup, to add its technology to systems the U.S. military uses to counter drone attacks. The partnership, which the companies announced Wednesday, marks OpenAI’s deepest involvement yet with the Defense Department and its first tie-up with a commercial weapons maker.

It is the latest example of Silicon Valley’s dramatic turn from shunning the Pentagon a few years ago to forging deeper ties with the national security complex.

OpenAI, valued at more than $150 billion, previously barred its AI from being used in military and warfare. In January, it changed its policies to allow some collaborations with the military.

While the company still prohibits the use of its technology in offensive weapons, it has made deals with the Defense Department for cybersecurity work and other projects. This year, OpenAI added former National Security Agency chief Paul Nakasone to its board and hired former Defense Department official Sasha Baker to create a team focused on national-security policy.

Other tech companies are making similar moves, arguing that the U.S. must treat AI technology as a strategic asset to bolster national security against countries like China. Last month, startup Anthropic said it would give access to its AI to the U.S. military through a partnership with Palantir Technologies.

OpenAI will incorporate its tech into Anduril’s counterdrone systems software, the companies said.

The Anduril systems detect, assess and track unmanned aircraft. If a threatening drone is identified, militaries can use electronic jamming, drones and other means to take it down.

The AI could improve the accuracy and speed of detecting and responding to drones, putting fewer people in harm’s way, Anduril said.

The Anduril deal ties OpenAI to some tech leaders who have espoused conservative ideals and backed Donald Trump. Anduril co-founder Palmer Luckey was an early and vocal Trump supporter from the tech industry. Luckey’s sister is married to Matt Gaetz, Trump’s pick to lead the Justice Department before he withdrew from consideration.

Luckey is also close to Trump’s ally, Elon Musk. Musk has praised Luckey’s entrepreneurship and encouraged him to join the Trump transition team. Luckey has, at times, fashioned himself as a younger Musk and references Musk as a pioneer in selling startup technology to the Pentagon.

The alliance between Anduril and OpenAI might also help buffer the AI company’s chief executive, Sam Altman, from possible backlash from Musk, who has openly disparaged Altman and sued his company. Musk was a co-founder of OpenAI but stepped away from the company in 2018 after clashing with Altman over the company’s direction. Last year, Musk founded a rival AI lab, xAI.

At an event on Wednesday, Altman said he didn’t think Musk would use his close relationship with Trump to undermine rivals.

“It would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors,” Altman said at the New York Times’s DealBook conference in New York City. “I don’t think people would tolerate that. I don’t think Elon would do it.”

Anduril is leading the push by venture-backed startups to sell high-tech, AI-powered systems to replace traditional tanks and attack helicopters. The company sells weapons to militaries around the world and AI software that enables the weapons to act autonomously.

Anduril Chief Executive Officer Brian Schimpf said in a statement that adding OpenAI technology to Anduril systems will “enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations.”

Anduril, valued at $14 billion, is one of the few success stories among a crowd of fledgling defense startups. In November, the company announced a $200 million contract to provide the U.S. Marine Corps with its counterdrone system. The company said the Defense Department uses the counterdrone systems to protect military installations.

As part of this agreement, OpenAI’s technology won’t be used with Anduril’s other weapons systems, the companies said.

Altman said in a statement that his company wants to “ensure the technology upholds democratic values.”

The companies declined to comment on the financial terms of the partnership.

Technology entrepreneurs, backed by billions of dollars in venture capital, have bet that future conflicts will hinge on large numbers of small, AI-powered autonomous systems to attack and defend. Defense-tech companies and some Pentagon leaders say the U.S. military needs better AI for a potential conflict with China and other sophisticated adversaries.

AI has proved increasingly important for keeping drones in the air after the rise of electronic warfare, which uses jammers to block GPS signals and radio frequencies that drones use to fly. AI can also help soldiers and military chiefs filter large amounts of battlefield data.

Wading deeper into defense opens another source of revenue for OpenAI, which seeks to evolve from the nonprofit lab of its roots to a moneymaking leader in the AI industry. The computing costs to develop and operate AI models are exorbitant, and the company is losing billions of dollars a year.