Showing posts with label AI.

Wednesday, October 22, 2025

Is the current AI frenzy a bubble certain to crash? - a check-in with disagreebot.com

Disagreebot: I am a disagreeable bot. Can you get me to agree with you?

Me: The billions of dollars in debt being taken on by OpenAI, Anthropic, and other AI companies is vastly larger than their earnings. This is not a bubble that will crash, however, because demand will increase to levels that begin to generate a profit for the AI companies.

Disagreebot: I disagree that this situation won't lead to a bubble. While high demand for AI exists, historically, many tech bubbles have formed when companies heavily invest in unproven technologies without stable earnings. Additionally, market dynamics can shift, affecting demand unpredictably. How can you be sure that demand will sustain at the projected levels over the long term?

Me: The billions of dollars in debt being taken on by OpenAI, Anthropic, and other AI companies is vastly larger than their earnings. This is a bubble that will crash, because there is no way that future demand will rise enough to result in these companies finally earning a profit.

Disagreebot: I disagree that this is definitively a bubble set to crash. While current debt may outpace earnings, it's important to consider that technological innovation often leads to unforeseen market opportunities. Additionally, many startups and tech companies have operated at a loss for years before finding profitable models. What makes you certain that these companies won't adapt or uncover new revenue streams?

Tuesday, October 21, 2025

The illusion of judgment in LLMs

An important open access article from Loru et al.:

Significance

Large Language Models (LLMs) are used in evaluative tasks across domains. Yet, what appears as alignment with human or expert judgments may conceal a deeper shift in how “judgment” itself is operationalized. Using news outlets as a controlled benchmark, we compare six LLMs to expert ratings and human evaluations under an identical, structured framework. While models often match expert outputs, our results suggest that they may rely on lexical associations and statistical priors rather than contextual reasoning or normative criteria. We term this divergence epistemia: the illusion of knowledge emerging when surface plausibility replaces verification. Our findings suggest not only performance asymmetries but also a shift in the heuristics underlying evaluative processes, raising fundamental questions about delegating judgment to LLMs.

Abstract

Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes.
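The abstract describes the structured agentic framework only at a high level. As a rough illustration of what a shared three-step procedure (select criteria, retrieve content, produce justifications) might look like, here is a minimal sketch; the function names and the agent interface are my assumptions, not taken from Loru et al.'s actual implementation.

```python
# A minimal sketch, assuming a generic "agent" interface, of the three-step
# evaluation procedure the abstract describes. All names here are
# hypothetical, not from Loru et al.'s framework.
from dataclasses import dataclass

CRITERIA = ["transparency", "sourcing", "error correction", "ad/editorial separation"]

@dataclass
class Evaluation:
    domain: str
    criteria: list[str]
    evidence: dict[str, str]
    justification: str
    rating: str  # e.g., "reliable" / "questionable"

def evaluate_domain(domain: str, agent) -> Evaluation:
    """Run one news domain through the same structured pipeline for any
    agent (an LLM wrapper or a human participant's interface)."""
    criteria = agent.select_criteria(CRITERIA)                    # step 1: choose criteria
    evidence = {c: agent.retrieve(domain, c) for c in criteria}   # step 2: gather content
    justification, rating = agent.judge(domain, evidence)         # step 3: justify and rate
    return Evaluation(domain, criteria, evidence, justification, rating)
```

Holding the procedure fixed in this way is what lets the authors compare the observable criteria models and humans rely on, rather than just their final ratings.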


Wednesday, September 24, 2025

Could humans and AI become a new evolutionary individual?

This PNAS opinion piece (open access) by Rainey and Hochberg is well worth a read. It suggests that as different AI implementations suffuse into cultures, current competition between different cultures could evolve into accelerating competition between different culture-AI hybrids. I pass on the introductory paragraphs of their argument:

Artificial intelligence (AI)—broadly defined as the capacity of engineered systems to perform tasks that would require intelligence if done by humans—is increasingly embedded in the infrastructure of human life. From personalized recommendation systems to large-scale decision-making frameworks, AI shapes what humans see, choose, believe, and do (1, 2). Much of the current concern about AI centers on its understanding, safety, and alignment with human values (3–5). But alongside these immediate challenges lies a broader, more speculative, and potentially more profound question: could the deepening interdependence between humans and AI give rise to a new kind of evolutionary individual? We argue that as interdependencies grow, humans and AI could come to function not merely as interacting agents, but as an integrated evolutionary individual subject to selection at the collective level.

Lessons from the History of Life

The theory of major evolutionary transitions (METs) provides a framework for understanding this possibility (6). METs have punctuated the history of life. Those of particular relevance to our thesis here are those involving transitions in individuality. Such transitions are defined by events in which lower-level autonomous units—such as genes, cells, or organisms—become components of a higher-level individual subject to selection as a whole (7). Examples include the evolution of chromosomes from independent genes, multicellular organisms from single cells, and eusocial colonies from solitary ancestors (8). A particularly instructive case for our purposes is the eukaryotic cell, which arose from the integration of two ancient microbes—an archaeon and a eubacterium (9).*
 
To better understand the evolution of human–AI interactions, consider a globally embedded system: diffuse in structure, nonreplicating, yet unified in function. Such a system might emerge not by design alone, but via competitive, economic, or ecological pressures (10), selected for its capacity to integrate, persist, and coordinate across human networks. Initially shaped by human design and deployment, this AI could become embedded across societal and cultural infrastructures—advising, mediating, and responding. This would entail a persistent informational presence that learns from humanity and, in turn, comes to shape it. Importantly, even if yet incapable of autonomous replication, such an AI is likely to modify its own code in response to environmental feedback.

 

Monday, September 08, 2025

Rethinking how our brains work.

After reviewing Hoffman's mind-bending ideas that were the subject of the previous post, I decided to look back at another post on changing our perspective on how our minds work, offered by Barrett and collaborators in the February 2023 issue of Trends in Cognitive Sciences as an open access Opinion article. They suggest that new research approaches grounded in different ontological commitments will be required to properly describe brain-behavior relationships. Here is a clip of the introductory text and a graphic from the article, followed by their concluding remarks on rethinking what a mind is and how a brain works.

THEN, I pass on the result of ChatGPT 5's scan of the literature for critical commentary on these ideas, with its summary of that commentary. So, to start with Barrett and collaborators:

Most brain imaging studies present stimuli and measure behavioral responses in temporal units (trials) that are ordered randomly. Participants’ brain signals are typically aggregated to model structured variation that allows inferences about the broader population from which people were sampled. These methodological details, when used to study any phenomenon of interest, often give rise to brain-behavior findings that vary unexpectedly (across stimuli, context, and people). Such findings are typically interpreted as replication failures, with the observed variation discounted as error caused by less than rigorous experimentation (Box 1). Methodological rigor is of course important, but replication problems may stem, in part, from a more pernicious source: faulty assumptions (i.e., ontological commitments) that mis-specify the psychological phenomena of interest.

In this paper, we review three questionable assumptions whose reconsideration may offer opportunities for a more robust and replicable science: 

  The localization assumption: the instances that constitute a category of psychological events (e.g., instances of fear) are assumed to be caused by a single, dedicated psychological process implemented in a dedicated neural ensemble (see Glossary). 

  The one-to-one assumption: the dedicated neural ensemble is assumed to map uniquely to that psychological category, such that the mapping generalizes across contexts, people, measurement strategies, and experimental designs. 

  The independence assumption: the dedicated neural ensemble is thought to function independently of contextual factors, such as the rest of the brain, the body, and the surrounding world, so the ensemble can be studied alone without concern for those other factors. Contextual factors might moderate activity in the neural ensemble but should not fundamentally change its mapping to the instances of a psychological category. 

 These three assumptions are rooted in a typological view of the mind, brain, and behavior that was modeled on 19th century physics and continues to guide experimental practices in much of brain-behavior research to the present day. In this paper, we have curated examples from studies of human functional magnetic resonance imaging (fMRI) and neuroscience research using non-human animals that call each assumption into question. We then sketch the beginnings of an alternative approach to study brain-behavior relationships, grounded in different ontological commitments: (i) a mental event comprises distributed activity across the whole brain; (ii) brain and behavior are linked by degenerate (i.e., many-to-one) mappings; and (iii) mental events emerge as a complex ensemble of weak, nonlinearly interacting signals from the brain, body, and external world.

[Graphic from the article omitted.]

Concluding remarks 

Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods. Our suggestions require more than merely improved methodological rigor for traditional experimental design. Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology, as well as cybernetics and systems theory. At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.

The above is a slightly modified version of MindBlog's 2/20/23 post.
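Their notion of degenerate, many-to-one brain-behavior mapping is easy to make concrete with a toy example (my construction, not from the paper): several distinct activation patterns can all drive the same categorical readout, so no single pattern is "the" neural signature of the category.

```python
# Toy illustration of degeneracy (many-to-one brain-behavior mapping):
# three "neural ensembles" that are disjoint in which units are active
# all yield the same categorical output, so the category has no unique
# neural signature. Purely illustrative, not from Barrett et al.
import numpy as np

def category(activity: np.ndarray) -> str:
    """A downstream readout: the category depends only on the summed
    drive, not on which particular units carry it."""
    return "fear" if activity.sum() > 1.0 else "neutral"

# Three distinct activation patterns over four units...
patterns = [
    np.array([1.5, 0.0, 0.0, 0.0]),
    np.array([0.0, 0.8, 0.8, 0.0]),
    np.array([0.0, 0.0, 0.4, 1.2]),
]
# ...that all map onto the same psychological category.
print([category(p) for p in patterns])  # ['fear', 'fear', 'fear']
```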

NOW, here is ChatGPT 5's take on subsequent critical discussion:

The critical commentary (what pushback says)

  1. “You’re arguing against yesterday’s straw man.”

    Several commentators agree we should move beyond simplistic blob-localization, but say the field already has. They note modern work uses individual-subject “precision fMRI,” causal patient evidence, and functional localizers—revealing selective, reproducible networks (e.g., for language) that are not captured by group-averages. Conclusion: reject naïve localization, yes—but specificity and dissociations remain central, so it’s not a neural “free-for-all.” 

  2. Robust domain selectivity challenges a blanket “whole-brain” claim.

    Large syntheses from Kanwisher/Fedorenko and colleagues argue that the high-level language network is a distinct, selectively engaged system, dissociable from other cognitive functions (math, music, code, ToM), stable within individuals, and identifiable with localizers. This is presented as counter-evidence to the idea that instances of complex cognition generally arise only as diffuse whole-brain ensembles. 

  3. Emotion decoding studies contest “no neural fingerprints.”

    Barrett’s broader constructionist stance emphasizes heterogeneity/degeneracy, but critics point to multivariate fMRI models that reliably distinguish discrete emotions across stimuli and time, implying category-informative neural patterns exist (even if distributed). This is often cited as a boundary condition on the “there are no consistent neural signatures” narrative. (A toy sketch of this multivariate logic appears after this list.)

  4. Degeneracy is real—but can become unfalsifiable if over-generalized.

    Methodologists warn that invoking degeneracy everywhere risks eroding explanatory bite unless paired with causal constraints (lesions/TMS/intracranial) and testable predictions. Some reviews urge retaining levels of analysis and explicit ontologies so “everything depends on everything” doesn’t stall progress. (See wide-angle discussions of localization vs. anti-localization and philosophical overviews.) 

  5. Whole-brain metrics can be degenerate too.

    Ironically, connectivity/whole-brain measures are themselves degenerate representations—different underlying interactions can produce similar functional connectivity patterns—so simply shifting to whole-brain modeling doesn’t automatically solve mapping problems (and can worsen interpretability without rigorous controls). 

  6. Balanced takes from outside neuroscience proper echo this middle path.

    Commentary following the paper (e.g., Awais Aftab summarizing cross-disciplinary reactions) applauds retiring simplistic localization, while endorsing “differential involvement” and network-level specificity—arguing the right synthesis acknowledges both contextual, distributed computation and reproducible domain-selective systems. 
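To make point 3 concrete, here is a toy sketch of the multivariate logic (simulated data, my construction; real studies decode fMRI response patterns): each individual feature carries only a weak signal, yet a classifier over the full pattern separates the two categories well above chance.

```python
# Toy multivariate "decoding" demo: per-feature signal is weak, but a
# classifier over the whole pattern decodes the category reliably.
# Simulated Gaussian data only; real studies use fMRI voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_features = 200, 50
labels = rng.integers(0, 2, n_trials)     # two "emotion" categories
effect = 0.3                              # weak per-feature signal

# Noise plus a small, distributed category-dependent shift per feature.
X = rng.normal(size=(n_trials, n_features))
X += effect * labels[:, None] * rng.choice([-1, 1], n_features)

acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```

The point of contention is whether such category-informative distributed patterns count against, or are compatible with, the degeneracy thesis.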


Bottom line

  • Broad agreement: ditch cartoon “one area = one function” and embrace context, networks, and many-to-one mappings.

  • Where critics draw the line: do not jettison functional specificity, dissociations, and causal constraints; acknowledge domains (like language) where selective networks are well-supported; keep degeneracy testable.

  • Net effect: The Barrett/Westlin piece sharpened an ongoing shift in the field; the strongest criticisms mainly concern over-reach (arguing against outdated practices) and generalization (treating “whole-brain + degeneracy” as universal rather than domain-dependent).