
Friday, May 23, 2025

A new route towards dystopia? Sonifying tactile interactions and their underlying emotions to allow ‘social touch.’

Our Tech-World overlords may be using work like the following from de Lagarde et al. to find ways for us to forgo the evolved succor of human touch and survive only in the company of audiovisual feeds and android companions. As an antidote to social isolation, however, perhaps it is better than nothing.

Social touch is crucial for human well-being, as a lack of tactile interactions increases anxiety, loneliness, and need for social support. To address the detrimental effects of social isolation, we build on cutting-edge research on social touch and movement sonification to investigate whether social tactile gestures could be recognized through sounds, a sensory channel giving access to remote information. Four online experiments investigated participants’ perception of auditory stimuli that were recorded with our “audio-touch” sonification technique, which captures the sounds of touch. In the first experiment, participants correctly categorized sonified skin-on-skin tactile gestures (i.e., stroking, rubbing, tapping, hitting). In the second experiment, the audio-touch sample consisted of the sonification of six socio-emotional intentions conveyed through touch (i.e., anger, attention, fear, joy, love, sympathy). Participants categorized above chance the socio-emotional intentions of skin-on-skin touches converted into sounds and coherently rated their valence. In two additional experiments, the surface involved in the touches (either skin or plastic) was shown to influence participants’ recognition of sonified gestures and socio-emotional intentions. Thus, our research unveils that specific information about social touch (i.e., gesture, emotions, and surface) can be recognized through sounds, when they are obtained with our specific sonifying methodology. This shows significant promise for providing remote access, through the auditory channel, to meaningful social touch interactions.

Wednesday, May 21, 2025

Why does AI hinder democratization?

Here is the abstract from the open-access article by Chu et al. in PNAS:

This paper examines the relationship between democratization and the development of AI and information and communication technology (ICT). Our empirical evidence shows that in the past 10 y, the advancement of AI/ICT has hindered the development of democracy in many countries around the world. Given that both the state rulers and civil society groups can use AI/ICT, the key that determines which side would benefit more from the advancement of these technologies hinges upon “technology complementarity.” In general, AI/ICT would be more complementary to the government rulers because they are more likely than civil society groups to access various administrative big data. Empirically, we propose three hypotheses and use statistical tests to verify our argument. Theoretically, we prove a proposition, showing that when the above-mentioned complementarity assumption is true, the AI/ICT advancements would enable rulers in authoritarian and fragile democratic countries to achieve better control over civil society forces, which leads to the erosion of democracy. Our analysis explains the recent ominous development in some fragile-democracy countries.

Monday, May 19, 2025

AI is not your friend.

I want to pass on clips from Mike Caulfield's piece in The Atlantic on how "opinionated" chatbots destroy AI's potential, and how this can be fixed:

Recently, after an update that was supposed to make ChatGPT “better at guiding conversations toward productive outcomes,” according to release notes from OpenAI, the bot couldn’t stop telling users how brilliant their bad ideas were. ChatGPT reportedly told one person that their plan to sell literal “shit on a stick” was “not just smart—it’s genius.”
Many more examples cropped up, and OpenAI rolled back the product in response, explaining in a blog post that “the update we removed was overly flattering or agreeable—often described as sycophantic.” The company added that the chatbot’s system would be refined and new guardrails would be put into place to avoid “uncomfortable, unsettling” interactions.
But this was not just a ChatGPT problem. Sycophancy is a common feature of chatbots: A 2023 paper by researchers from Anthropic found that it was a “general behavior of state-of-the-art AI assistants,” and that large language models sometimes sacrifice “truthfulness” to align with a user’s views. Many researchers see this phenomenon as a direct result of the “training” phase of these systems, where humans rate a model’s responses to fine-tune the program’s behavior. The bot sees that its evaluators react more favorably when their views are reinforced—and when they’re flattered by the program—and shapes its behavior accordingly.
The specific training process that seems to produce this problem is known as “Reinforcement Learning From Human Feedback” (RLHF). It’s a variety of machine learning, but as recent events show, that might be a bit of a misnomer. RLHF now seems more like a process by which machines learn humans, including our weaknesses and how to exploit them. Chatbots tap into our desire to be proved right or to feel special.
Reading about sycophantic AI, I’ve been struck by how it mirrors another problem. As I’ve written previously, social media was imagined to be a vehicle for expanding our minds, but it has instead become a justification machine, a place for users to reassure themselves that their attitude is correct despite evidence to the contrary. Doing so is as easy as plugging into a social feed and drinking from a firehose of “evidence” that proves the righteousness of a given position, no matter how wrongheaded it may be. AI now looks to be its own kind of justification machine—more convincing, more efficient, and therefore even more dangerous than social media.
OpenAI’s explanation about the ChatGPT update suggests that the company can effectively adjust some dials and turn down the sycophancy. But even if that were so, OpenAI wouldn’t truly solve the bigger problem, which is that opinionated chatbots are actually poor applications of AI. Alison Gopnik, a researcher who specializes in cognitive development, has proposed a better way of thinking about LLMs: These systems aren’t companions or nascent intelligences at all. They’re “cultural technologies”—tools that enable people to benefit from the shared knowledge, expertise, and information gathered throughout human history. Just as the introduction of the printed book or the search engine created new systems to get the discoveries of one person into the mind of another, LLMs consume and repackage huge amounts of existing knowledge in ways that allow us to connect with ideas and manners of thinking we might otherwise not encounter. In this framework, a tool like ChatGPT should evince no “opinions” at all but instead serve as a new interface to the knowledge, skills, and understanding of others.
...the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process; links to outside articles are now a common feature.
I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth. And this would extend even to areas where judgment is somewhat personal. Imagine, for example, asking an AI to evaluate your attempt at writing a haiku. Rather than pronouncing its “opinion,” it could default to explaining how different poetic traditions would view your work—first from a formalist perspective, then perhaps from an experimental tradition. It could link you to examples of both traditional haiku and more avant-garde poetry, helping you situate your creation within established traditions. In having AI moving away from sycophancy, I’m not proposing that the response be that your poem is horrible or that it makes Vogon poetry sound mellifluous. I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better.
There’s a good analogy in maps. Traditional maps showed us an entire landscape—streets, landmarks, neighborhoods—allowing us to understand how everything fit together. Modern turn-by-turn navigation gives us precisely what we need in the moment, but at a cost: Years after moving to a new city, many people still don’t understand its geography. We move through a constructed reality, taking one direction at a time, never seeing the whole, never discovering alternate routes, and in some cases never getting the sense of place that a map-level understanding could provide. The result feels more fluid in the moment but ultimately more isolated, thinner, and sometimes less human.
For driving, perhaps that’s an acceptable trade-off. Anyone who’s attempted to read a paper map while navigating traffic understands the dangers of trying to comprehend the full picture mid-journey. But when it comes to our information environment, the dangers run in the opposite direction. Yes, AI systems that mindlessly reflect our biases back to us present serious problems and will cause real harm. But perhaps the more profound question is why we’ve decided to consume the combined knowledge and wisdom of human civilization through a straw of “opinion” in the first place.
The promise of AI was never that it would have good opinions. It was that it would help us benefit from the wealth of expertise and insight in the world that might never otherwise find its way to us—that it would show us not what to think but how others have thought and how others might think, where consensus exists and where meaningful disagreement continues. As these systems grow more powerful, perhaps we should demand less personality and more perspective. The stakes are high: If we fail, we may turn a potentially groundbreaking interface to the collective knowledge and skills of all humanity into just more shit on a stick.
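A technical aside of my own, not Caulfield's: the RLHF step he describes amounts to fitting a "reward model" to human preference ratings and then nudging the chatbot toward responses that score well under it. Here is a minimal, purely illustrative sketch of that preference-fitting step, with fake data and a toy scoring network (nothing resembling OpenAI's actual pipeline):

```python
# Toy sketch of the preference-modeling step in RLHF (illustrative only).
# A "reward model" is trained so that responses humans preferred score higher
# than the ones they rejected (a pairwise, Bradley-Terry-style objective).
import torch
import torch.nn as nn

# Pretend each response has already been encoded into a feature vector.
DIM = 16
reward_model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake preference data: (chosen, rejected) pairs of response embeddings.
chosen = torch.randn(256, DIM)
rejected = torch.randn(256, DIM)

for step in range(200):
    r_chosen = reward_model(chosen)      # score of the human-preferred response
    r_rejected = reward_model(rejected)  # score of the response raters passed over
    # Maximize the probability that the preferred response outranks the other.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# In full RLHF, this reward model then steers the chatbot's fine-tuning
# (e.g., via PPO), which is exactly where "pleasing the rater" can shade into
# sycophancy: whatever raters reward, the model learns to produce more of.
```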

Wednesday, March 12, 2025

Critical fragility in sociotechnical systems

Here is the abstract and part of the introduction (giving examples of system breakdowns) of a fascinating and approachable analysis by Moran et al.  Motivated readers can obtain a PDF of the article from me. 

Abstract

Sociotechnical systems, where technological and human elements interact in a goal-oriented manner, provide important functional support to our societies. Here, we draw attention to the underappreciated concept of timeliness—i.e., system elements being available at the right place at the right time—that has been ubiquitously and integrally adopted as a quality standard in the modus operandi of sociotechnical systems. We point out that a variety of incentives, often reinforced by competitive pressures, prompt system operators to myopically optimize for efficiencies, running the risk of inadvertently taking timeliness to the limit of its operational performance, correspondingly making the system critically fragile to perturbations by pushing the entire system toward the proverbial “edge of a cliff.” Invoking a stylized model for operational delays, we identify the limiting operational performance of timeliness, as a true critical point, where the smallest of perturbations can lead to a systemic collapse. Specifically for firm-to-firm production networks, we suggest that the proximity to critical fragility is an important ingredient for understanding the fundamental “excess volatility puzzle” in economics. Further, in generality for optimizing sociotechnical systems, we propose that critical fragility is a crucial aspect in managing the trade-off between efficiency and robustness.
 
From the introduction:
 
Sociotechnical systems (STSs) are complex systems where human elements (individuals, groups, and larger organizations), technology, and infrastructure combine, and interact, in a goal-oriented manner. Their functionalities require designed or planned interactions among the system elements—humans and technology—that are often spread across geographical space. The pathways for these interactions are designed and planned with the aim of providing operational stability of STSs, and they are embedded within technological infrastructures (1). Playing crucial roles in health services, transport, communications, energy provision, food supply, and, more generally, in the coordinated production of goods and services, they make our societies function. STSs exist at many different levels, from niche systems like neighborhood garbage disposal, to intermediate systems such as regional/national waste management, reaching up to systems of systems, e.g., global climate coordination in a world economy.
 
In spite of the design of the STSs with the intention of providing stable operations, STSs display the hallmarks of fragility, where the emergence of nontrivial dynamical instabilities is commonplace (2). Minor and/or geographically local events can cascade and spread to lead to system-wide disruptions, including a collapse of the whole system. Examples include i) the grounding of an entire airline [e.g., Southwest Airlines in April 2023 (3)]; ii) the cancellation of all train rides to reboot scheduling (4); iii) a worldwide supply chain blockage due to natural disasters (5), or because of a singular shipping accident [e.g., in the Suez Canal in March 2021 (6)]; iv) a financial crash happening without a compelling fundamental reason and on days without significant news [e.g., the “Black Monday” October 19, 1987, stock market crash (7)]; or v) the global financial (and economic) crisis of 2008 that emanated from the US subprime loan market, which represented a small fraction of the US economy, and an even smaller fraction of the global economy (8).
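To make the "stylized model for operational delays" in the abstract concrete, here is a toy simulation of my own devising (not the authors' model): each stage in a chain of operations has a slack buffer that absorbs incoming delay, and whatever the buffer cannot absorb propagates downstream. As buffers are trimmed in the name of efficiency, the same small shock goes from dying out locally to cascading through nearly the whole chain.

```python
# Toy illustration (my own, not Moran et al.'s model): delay propagation in a
# chain of operations, each with a slack "buffer". A stage absorbs incoming
# delay up to its buffer; anything beyond that is passed downstream.
import random

def cascade_length(n_stages, buffer, shock, noise=0.05, trials=2000):
    """Average number of stages a single initial delay propagates through."""
    total = 0
    for _ in range(trials):
        delay = shock
        reached = 0
        for _ in range(n_stages):
            # Each stage adds a little random jitter, then absorbs up to `buffer`.
            delay = max(0.0, delay + random.uniform(0, noise) - buffer)
            if delay == 0.0:
                break
            reached += 1
        total += reached
    return total / trials

# Trimming buffers toward the critical point (here, buffer ~ mean jitter):
# the same small shock goes from dying out to disrupting most of the chain.
for buffer in (0.20, 0.10, 0.05, 0.03, 0.025):
    print(f"buffer={buffer:.3f}  avg stages disrupted: "
          f"{cascade_length(n_stages=100, buffer=buffer, shock=0.1):.1f}")
```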

Monday, December 23, 2024

Stephen Fry: "AI: A Means to an End or a Means to Our End?"

I have to pass on to MindBlog readers and my future self this link to a brilliant lecture by Stephen Fry. It is an engaging and entertaining analysis, steeped in relevant history and precedents, of ways we might be heading into the future. Here is just one clip from the piece:

We cling on to the fierce hope that the one feature machines will never be able to match is our imagination, our ability to penetrate the minds and feelings of others. We feel immeasurably enriched by this as individuals and as social animals. An Ai may know more about the history of the First World War than all human historians put together. Every detail of every battle, all the recorded facts of personnel and materiel that can be known. But in fact I know more about it because I have read the poems of Wilfred Owen. I’ve read All Quiet on the Western Front. I’ve seen Kubrick’s The Paths of Glory. So I can smell, touch, hear, feel the war, the gas, the comradeship, the sudden deaths and terrible fear. I know its meaning. My consciousness and experience of perceptions and feelings allows me access to the consciousness and experiences of others; their voices reach me. These are data that machines can scrape, but they cannot — to use a good old 60s phrase — relate to. Empathy. Identification. Compassion. Connection. Belonging. Something denied a sociopathic machine. Is this the only little island, the only little circle of land left to us as the waters of Ai lap around our ankles? And for how long? We absolutely cannot be certain that, just as psychopaths (who aren’t all serial killers) can entirely convincingly feign empathy and emotional understanding, so will machines, and very soon. They will fool us, just as sociopaths can and do, and frankly just as we all do to some bore or nuisance when we smile and nod encouragement but actually feel nothing for them. No, we can hope that our sense of human exceptionalism is justified and that what we regard as unique and special to us will keep us separate and valuable but we have to remember how much of our life and behaviour is performative, how many masks we wear and how the masks conceal only other masks. After all, is our acquisition of language any more conscious, real and worthy than the Bayesian parroting of the LLM? Chomsky tells us linguistic structures are embedded within us. We pick up the vocabulary and the rules from the data we scrape from around us - our parents, older siblings and peers. Out the sentences roll from us syntagmatically, we’ve no real idea how we do it. For example, how do we know the difference in connotation between the verbs to saunter and to swagger? It is very unlikely anyone taught us. We picked it up from context. In other words, from Bayesian priors, just like an LLM.

The fact is we don’t truly understand ourselves or how we came to be how and who we are. But we know about genes and we know about natural selection, the gravity that drives our evolution. And we are already noticing that principle at work with machines.



Monday, December 09, 2024

An AI framework for neural–behavioral modeling

Work of Sani et al. (open access) is reported in the Oct. 2024 issue of Nature Neuroscience. From the editor's summary:

Neural dynamics are complex and simultaneously relate to distinct behaviors. To address these challenges, Sani et al. have developed an AI framework termed DPAD that achieves nonlinear dynamical modeling of neural–behavioral data, dissociates behaviorally relevant neural dynamics, and localizes the source of nonlinearity in the dynamical model. What DPAD does is visualized as separating the overall brain activity into distinct pieces related to specific behaviors and discovering how these pieces fit together to build the overall activity.

Here is the Sani et al. abstract:

Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural–behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural–behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural–behavioral data.
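For readers who want a feel for the two-stage logic, here is a highly simplified, hypothetical sketch of my own (not the authors' DPAD architecture, which uses recurrent networks to model dynamics over time): first learn latent states from neural activity that are optimized purely to predict behavior, then, with those frozen, fit additional latents to account for the remaining, behavior-irrelevant neural variance.

```python
# Highly simplified sketch of the "dissociative, prioritized" two-stage idea
# behind DPAD (illustrative only; not the authors' architecture or code).
import torch
import torch.nn as nn

N_NEURONS, N_BEHAV, D1, D2 = 50, 3, 8, 8

# Fake data: neural activity and simultaneously recorded behavior.
neural = torch.randn(1000, N_NEURONS)
behavior = torch.randn(1000, N_BEHAV)

# Stage 1: latent states extracted from neural activity, trained ONLY to
# predict behavior -- this prioritizes behaviorally relevant variance.
encoder1 = nn.Linear(N_NEURONS, D1)
behav_readout = nn.Sequential(nn.Linear(D1, 16), nn.ReLU(), nn.Linear(16, N_BEHAV))
opt1 = torch.optim.Adam([*encoder1.parameters(), *behav_readout.parameters()], lr=1e-3)
for _ in range(300):
    z1 = encoder1(neural)
    loss1 = nn.functional.mse_loss(behav_readout(z1), behavior)
    opt1.zero_grad()
    loss1.backward()
    opt1.step()

# Stage 2: with stage-1 parameters frozen, additional latents are fit to
# explain whatever neural variance is left -- the behavior-irrelevant part.
encoder2 = nn.Linear(N_NEURONS, D2)
neural_readout = nn.Linear(D1 + D2, N_NEURONS)
opt2 = torch.optim.Adam([*encoder2.parameters(), *neural_readout.parameters()], lr=1e-3)
with torch.no_grad():
    z1 = encoder1(neural)  # frozen stage-1 latents
for _ in range(300):
    z2 = encoder2(neural)
    loss2 = nn.functional.mse_loss(neural_readout(torch.cat([z1, z2], dim=1)), neural)
    opt2.zero_grad()
    loss2.backward()
    opt2.step()
```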

Thursday, December 05, 2024

The Future of Warfare

Passing on an article from today's WSJ that I want to save, using MindBlog as my personal archive: 

OpenAI Forges Tie-Up To Defense Industry

OpenAI, the artificial-intelligence company behind ChatGPT, is getting into the business of war.

The world’s most valuable AI company has agreed to work with Anduril Industries, a leading defense-tech startup, to add its technology to systems the U.S. military uses to counter drone attacks. The partnership, which the companies announced Wednesday, marks OpenAI’s deepest involvement yet with the Defense Department and its first tie-up with a commercial weapons maker.

It is the latest example of Silicon Valley’s dramatic turn from shunning the Pentagon a few years ago to forging deeper ties with the national security complex.

OpenAI, valued at more than $150 billion, previously barred its AI from being used in military and warfare. In January, it changed its policies to allow some collaborations with the military.

While the company still prohibits the use of its technology in offensive weapons, it has made deals with the Defense Department for cybersecurity work and other projects. This year, OpenAI added former National Security Agency chief Paul Nakasone to its board and hired former Defense Department official Sasha Baker to create a team focused on national-security policy.

Other tech companies are making similar moves, arguing that the U.S. must treat AI technology as a strategic asset to bolster national security against countries like China. Last month, startup Anthropic said it would give access to its AI to the U.S. military through a partnership with Palantir Technologies.

OpenAI will incorporate its tech into Anduril’s counterdrone systems software, the companies said.

The Anduril systems detect, assess and track unmanned aircraft. If a threatening drone is identified, militaries can use electronic jamming, drones and other means to take it down.

The AI could improve the accuracy and speed of detecting and responding to drones, putting fewer people in harm’s way, Anduril said.

The Anduril deal ties OpenAI to some tech leaders who have espoused conservative ideals and backed Donald Trump. Anduril co-founder Palmer Luckey was an early and vocal Trump supporter from the tech industry. Luckey’s sister is married to Matt Gaetz, Trump’s pick to lead the Justice Department before he withdrew from consideration.

Luckey is also close to Trump’s ally, Elon Musk. Musk has praised Luckey’s entrepreneurship and encouraged him to join the Trump transition team. Luckey has, at times, fashioned himself as a younger Musk and references Musk as a pioneer in selling startup technology to the Pentagon.

The alliance between Anduril and OpenAI might also help buffer the AI company’s chief executive, Sam Altman, from possible backlash from Musk, who has openly disparaged Altman and sued his company. Musk was a co-founder of OpenAI but stepped away from the company in 2018 after clashing with Altman over the company’s direction. Last year, Musk founded a rival AI lab, xAI.

At an event on Wednesday, Altman said he didn’t think Musk would use his close relationship with Trump to undermine rivals.

“It would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors,” Altman said at the New York Times’s DealBook conference in New York City. “I don’t think people would tolerate that. I don’t think Elon would do it.”

Anduril is leading the push by venture-backed startups to sell high-tech, AI-powered systems to replace traditional tanks and attack helicopters. The company sells weapons to militaries around the world and AI software that enables the weapons to act autonomously.

Anduril Chief Executive Officer Brian Schimpf said in a statement that adding OpenAI technology to Anduril systems will “enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations.”

Anduril, valued at $14 billion, is one of the few success stories among a crowd of fledgling defense startups. In November, the company announced a $200 million contract to provide the U.S. Marine Corps with its counterdrone system. The company said the Defense Department uses the counterdrone systems to protect military installations.

As part of this agreement, OpenAI’s technology won’t be used with Anduril’s other weapons systems, the companies said.

Altman said in a statement that his company wants to “ensure the technology upholds democratic values.”

The companies declined to comment on the financial terms of the partnership.

Technology entrepreneurs, backed by billions of dollars in venture capital, have bet that future conflicts will hinge on large numbers of small, AI-powered autonomous systems to attack and defend. Defense-tech companies and some Pentagon leaders say the U.S. military needs better AI for a potential conflict with China and other sophisticated adversaries.

AI has proved increasingly important for keeping drones in the air after the rise of electronic warfare, which uses jammers to block GPS signals and radio frequencies that drones use to fly. AI can also help soldiers and military chiefs filter large amounts of battlefield data.

Wading deeper into defense opens another source of revenue for OpenAI, which seeks to evolve from the nonprofit lab of its roots to a moneymaking leader in the AI industry. The computing costs to develop and operate AI models are exorbitant, and the company is losing billions of dollars a year.