
Monday, December 23, 2024

Stephen Fry: "AI: A Means to an End or a Means to Our End?"

I have to pass on to MindBlog readers and my future self this link to a brilliant lecture by Stephen Fry. It is an engaging and entertaining analysis, steeped in relevant history and precedents, of ways we might be heading into the future. Here is just one clip from the piece:

We cling on to the fierce hope that the one feature machines will never be able to match is our imagination, our ability to penetrate the minds and feelings of others. We feel immeasurably enriched by this as individuals and as social animals. An AI may know more about the history of the First World War than all human historians put together. Every detail of every battle, all the recorded facts of personnel and materiel that can be known. But in fact I know more about it because I have read the poems of Wilfred Owen. I’ve read All Quiet on the Western Front. I’ve seen Kubrick’s Paths of Glory. So I can smell, touch, hear, feel the war, the gas, the comradeship, the sudden deaths and terrible fear. I know its meaning. My consciousness and experience of perceptions and feelings allow me access to the consciousness and experiences of others; their voices reach me. These are data that machines can scrape, but they cannot — to use a good old 60s phrase — relate to. Empathy. Identification. Compassion. Connection. Belonging. Something denied a sociopathic machine. Is this the only little island, the only little circle of land left to us as the waters of AI lap around our ankles? And for how long?

We can be absolutely certain that, just as psychopaths (who aren’t all serial killers) can entirely convincingly feign empathy and emotional understanding, so will machines, and very soon. They will fool us, just as sociopaths can and do, and frankly just as we all do to some bore or nuisance when we smile and nod encouragement but actually feel nothing for them. No, we can hope that our sense of human exceptionalism is justified and that what we regard as unique and special to us will keep us separate and valuable, but we have to remember how much of our life and behaviour is performative, how many masks we wear and how the masks conceal only other masks.

After all, is our acquisition of language any more conscious, real and worthy than the Bayesian parroting of the LLM? Chomsky tells us linguistic structures are embedded within us. We pick up the vocabulary and the rules from the data we scrape from around us - our parents, older siblings and peers. Out the sentences roll from us syntagmatically; we’ve no real idea how we do it. For example, how do we know the difference in connotation between the verbs to saunter and to swagger? It is very unlikely anyone taught us. We picked it up from context. In other words, from Bayesian priors, just like an LLM.

The fact is we don’t truly understand ourselves or how we came to be how and who we are. But we know about genes and we know about natural selection, the gravity that drives our evolution. And we are already noticing that principle at work with machines.



Monday, December 09, 2024

An AI framework for neural–behavioral modeling

The work of Sani et al. (open access) is reported in the October 2024 issue of Nature Neuroscience. From the editor's summary:

Neural dynamics are complex and simultaneously relate to distinct behaviors. To address these challenges, Sani et al. have developed an AI framework termed DPAD that achieves nonlinear dynamical modeling of neural–behavioral data, dissociates behaviorally relevant neural dynamics, and localizes the source of nonlinearity in the dynamical model. What DPAD does can be visualized as separating the overall brain activity into distinct pieces related to specific behaviors and discovering how these pieces fit together to build the overall activity.

Here is the Sani et al. abstract:

Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural–behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural–behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural–behavioral data.
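
To make the abstract's two-section idea concrete, here is a minimal sketch in PyTorch of the kind of architecture it describes. This is not the authors' released DPAD code; the class names, dimensions, and two-stage training loop below are illustrative assumptions. Stage 1 fits latent dynamics prioritized to predict behavior from neural activity; stage 2 freezes those parameters and fits additional latent states to the residual neural variance. Making each mapping linear or nonlinear and comparing the resulting fits is the kind of hypothesis test about the source of nonlinearity the paper refers to.

```python
import torch
import torch.nn as nn

def mapping(n_in, n_out, nonlinear):
    # Each mapping can independently be linear or nonlinear; comparing
    # fits across these choices localizes where nonlinearity is needed.
    if nonlinear:
        return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))
    return nn.Linear(n_in, n_out)

class Section(nn.Module):
    """A recursive latent state x_t driven by neural observations y_t."""
    def __init__(self, n_neural, n_latent, nonlinear):
        super().__init__()
        self.n_latent = n_latent
        self.transition = mapping(n_latent + n_neural, n_latent, nonlinear)

    def forward(self, y):                      # y: (batch, time, n_neural)
        x = y.new_zeros(y.shape[0], self.n_latent)
        states = []
        for t in range(y.shape[1]):
            # tanh keeps this toy recursion stable (a simplification of
            # the paper's flexible linear/nonlinear recursions)
            x = torch.tanh(self.transition(torch.cat([x, y[:, t]], dim=-1)))
            states.append(x)
        return torch.stack(states, dim=1)      # (batch, time, n_latent)

class TwoSectionModel(nn.Module):
    """Section 1 is trained first to predict behavior (prioritizing
    behaviorally relevant dynamics); section 2 then explains the
    remaining neural variance with section 1 frozen."""
    def __init__(self, n_neural, n_behavior, n1=8, n2=8, nonlinear=True):
        super().__init__()
        self.sec1 = Section(n_neural, n1, nonlinear)
        self.sec2 = Section(n_neural, n2, nonlinear)
        self.behavior_readout = mapping(n1, n_behavior, nonlinear)
        self.neural_readout = mapping(n1 + n2, n_neural, nonlinear)

    def forward(self, y):
        x1, x2 = self.sec1(y), self.sec2(y)
        z_hat = self.behavior_readout(x1)
        y_hat = self.neural_readout(torch.cat([x1, x2], dim=-1))
        return z_hat, y_hat

# Two-stage training on synthetic data (shapes are arbitrary choices).
model = TwoSectionModel(n_neural=30, n_behavior=2)
y = torch.randn(16, 100, 30)   # neural activity: (batch, time, channels)
z = torch.randn(16, 100, 2)    # behavior:        (batch, time, dims)

# Stage 1: optimize only the behaviorally relevant pathway.
stage1_params = [*model.sec1.parameters(), *model.behavior_readout.parameters()]
opt1 = torch.optim.Adam(stage1_params, lr=1e-3)
for _ in range(200):
    opt1.zero_grad()
    z_hat, _ = model(y)
    nn.functional.mse_loss(z_hat, z).backward()
    opt1.step()

# Stage 2: freeze stage 1, fit the residual neural dynamics.
for p in stage1_params:
    p.requires_grad_(False)
opt2 = torch.optim.Adam([*model.sec2.parameters(),
                         *model.neural_readout.parameters()], lr=1e-3)
for _ in range(200):
    opt2.zero_grad()
    _, y_hat = model(y)
    nn.functional.mse_loss(y_hat, y).backward()
    opt2.step()
```

The published method differs in important respects: it works in one-step-ahead predictor form, handles intermittently sampled and categorical behaviors, and lets each individual mapping be linear or nonlinear. But the prioritized, two-stage dissociation above is the core idea the editor's summary points to.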

Thursday, December 05, 2024

The Future of Warfare

Passing on an article from today's WSJ that I want to save, using MindBlog as my personal archive: 

OpenAI Forges Tie-Up To Defense Industry

OpenAI, the artificial-intelligence company behind ChatGPT, is getting into the business of war.

The world’s most valuable AI company has agreed to work with Anduril Industries, a leading defense-tech startup, to add its technology to systems the U.S. military uses to counter drone attacks. The partnership, which the companies announced Wednesday, marks OpenAI’s deepest involvement yet with the Defense Department and its first tie-up with a commercial weapons maker.

It is the latest example of Silicon Valley’s dramatic turn from shunning the Pentagon a few years ago to forging deeper ties with the national security complex.

OpenAI, valued at more than $150 billion, previously barred its AI from being used for military and warfare purposes. In January, it changed its policies to allow some collaborations with the military.

While the company still prohibits the use of its technology in offensive weapons, it has made deals with the Defense Department for cybersecurity work and other projects. This year, OpenAI added former National Security Agency chief Paul Nakasone to its board and hired former Defense Department official Sasha Baker to create a team focused on national-security policy.

Other tech companies are making similar moves, arguing that the U.S. must treat AI technology as a strategic asset to bolster national security against countries like China. Last month, startup Anthropic said it would give access to its AI to the U.S. military through a partnership with Palantir Technologies.

OpenAI will incorporate its tech into Anduril’s counterdrone systems software, the companies said.

The Anduril systems detect, assess and track unmanned aircraft. If a threatening drone is identified, militaries can use electronic jamming, drones and other means to take it down.

The AI could improve the accuracy and speed of detecting and responding to drones, putting fewer people in harm’s way, Anduril said.

The Anduril deal ties OpenAI to some tech leaders who have espoused conservative ideals and backed Donald Trump. Anduril co-founder Palmer Luckey was an early and vocal Trump supporter from the tech industry. Luckey’s sister is married to Matt Gaetz, Trump’s pick to lead the Justice Department before he withdrew from consideration.

Luckey is also close to Trump’s ally, Elon Musk. Musk has praised Luckey’s entrepreneurship and encouraged him to join the Trump transition team. Luckey has, at times, fashioned himself as a younger Musk and references Musk as a pioneer in selling startup technology to the Pentagon.

The alliance between Anduril and OpenAI might also help buffer the AI company’s chief executive, Sam Altman, from possible backlash from Musk, who has openly disparaged Altman and sued his company. Musk was a co-founder of OpenAI but stepped away in 2018 after clashing with Altman over the company’s direction. Last year, Musk founded a rival AI lab, xAI.

At an event on Wednesday, Altman said he didn’t think Musk would use his close relationship with Trump to undermine rivals.

“It would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors,” Altman said at the New York Times’s DealBook conference in New York City. “I don’t think people would tolerate that. I don’t think Elon would do it.”

Anduril is leading the push by venture-backed startups to sell high-tech, AI-powered systems to replace traditional tanks and attack helicopters. The company sells weapons to militaries around the world and AI software that enables the weapons to act autonomously.

Anduril Chief Executive Officer Brian Schimpf said in a statement that adding OpenAI technology to Anduril systems will “enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations.”

Anduril, valued at $14 billion, is one of the few success stories among a crowd of fledgling defense startups. In November, the company announced a $200 million contract to provide the U.S. Marine Corps with its counterdrone system. The company said the Defense Department uses the counterdrone systems to protect military installations.

As part of this agreement, OpenAI’s technology won’t be used with Anduril’s other weapons systems, the companies said.

Altman said in a statement that his company wants to “ensure the technology upholds democratic values.”

The companies declined to comment on the financial terms of the partnership.

Technology entrepreneurs, backed by billions of dollars in venture capital, have bet that future conflicts will hinge on large numbers of small, AI-powered autonomous systems to attack and defend. Defense-tech companies and some Pentagon leaders say the U.S. military needs better AI for a potential conflict with China and other sophisticated adversaries.

AI has proved increasingly important for keeping drones in the air after the rise of electronic warfare, which uses jammers to block GPS signals and radio frequencies that drones use to fly. AI can also help soldiers and military chiefs filter large amounts of battlefield data.

Wading deeper into defense opens another source of revenue for OpenAI, which seeks to evolve from the nonprofit lab of its roots to a moneymaking leader in the AI industry. The computing costs to develop and operate AI models are exorbitant, and the company is losing billions of dollars a year.