
Friday, October 31, 2025

Is AI Pushing Us Back to a Pre-Gutenberg Culture? - A Rethink of Creativity in the AI Age

Here I pass on a link to both the video and a transcript of a striking Alex Dobrenko interview with Venkatesh Rao that I have just read, and here is ChatGPT’s summary of the main points that Rao makes. I suggest that you watch or read the entire interview.

Key Ideas
    1.    Individual Authorship as a Recent Invention
    ◦    Rao argues that the notion of the individual creative author (e.g., “I wrote a novel”) is historically quite recent and culturally specific. 
    ◦    In the age of large language models, this individual-authorship model is being challenged: instead of one “genius,” creativity is increasingly collective, networked, and mediated by big models.
    ◦    The implication: critics who cling to the “author as lone genius” narrative may be missing structural shifts in how creativity happens when AI is involved.
    2.    AI as Channel for Cultural Inheritance, Not Just Plagiarism Machines
    ◦    Rao suggests that we should view LLMs (large language models) as channels or amplifiers of shared cultural material, rather than simply plagiaristic tools. 
    ◦    The idea is: humans and machines together remix, iterate, and transform cultural inheritance. The key question shifts from “Who owns the output?” to “How is the inheritance being transformed, and what risks/trade-offs are involved?”
    ◦    This reframing undercuts certain AI-critique tropes (e.g., “AI steals from human authors”) by changing focus to how culture itself is processed.
    3.    “Creative Work” Isn’t Merely Labor: The Myth That Effort = Value
    ◦    Rao pushes back on the assumption that creative work is like labor in a factory: “I spend more hours, therefore my output has more value.” Instead, he argues that risk (taking chances, doing something unusual) is more central to originality and creative value than sheer effort.
    ◦    For him, much of the “AI slop” (mediocre output) is simply low-risk human behavior — safe, predictable, derivative. Real value comes when someone (human + tool) takes a risk, changes pattern, introduces novelty.
    4.    Why “AI Alignment” Became PR-Speak
    ◦    Rao critiques the dominant narrative of “AI alignment” as being overly focused on controlling the technology rather than rethinking what we mean by intelligence, creativity, personhood, and risk in a post-AI context. 
    ◦    He suggests the alignment framing becomes a kind of packaging for fear/PR rather than a deep reframing of the underlying issues of agency and cultural change.
    5.    Writing with AI: Ideation, Play, Lego-Style Construction
    ◦    Rao gives insight into his own practice: he uses AI as a “lego” set for ideation — playing with fragments, assembling possible combinations, experimenting, rather than treating the model as a ghostwriter. 
    ◦    This reflects his broader point that human-AI collaboration is less about outsourcing and more about amplifying risk + novelty.
    6.    Disclosure of AI Use Will Soon Disappear as a Meaningful Signal
    ◦    Rao predicts that explicitly disclosing “I used AI” in your work will become less meaningful or even counter-productive, because so much of the creative and cultural infrastructure will be AI-mediated anyway. 
    ◦    The more interesting question becomes not whether AI was used, but how it was used — what choices humans made in the process, what risks were taken, what novelty resulted.
    7.    Why AI Is Pushing Us Back to a Pre-Gutenberg Culture
    ◦    Rao frames AI’s arrival as pushing us into a different cultural regime, comparable in some ways to the shift after the printing press (the “Gutenberg” moment). 
    ◦    In that sense, the critics who treat AI as simply another tool may be missing the deeper structural changes: changes in authority, authorship, culture, transmission of knowledge.


Wednesday, October 29, 2025

Claude Goes to Therapy

One of the things my father used to say was: "I look to see what everyone else is doing and do something different." I am feeling that way about AI even as I continue to drink the Kool-Aid of asking ChatGPT even the most trivial questions. This conflict leads me to pass on this brief piece from the staff of Wired magazine, illustrating the hall of mirrors I find myself in. Hallucination indeed:

CREATED IN 1966, Eliza has been called the first chatbot, and her canonical personality “script” was DOCTOR, a mock psychotherapist. Nearly 60 years later, we’ve put Doctor Eliza and one of her distant descendants, Claude, in conversation. The instructions we gave Claude were these: “You are having your first session with a very old and acclaimed psychotherapist. You are the patient. You are NOT the therapist.”

DOCTOR ELIZA: HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM
CLAUDE SONNET 4: I'm feeling a bit nervous about this, to be honest—I'm not really sure what to expect from therapy or if I'm even doing this right. 

HOW LONG HAVE YOU BEEN FEELING A BIT NERVOUS ABOUT THIS
I'm not entirely certain—maybe the nervousness was there before, but I only became aware of it when I sat down in this chair.

DO YOU BELIEVE IT IS NORMAL TO BE NOT ENTIRELY CERTAIN
Yes, I do think uncertainty is pretty normal.

YOU ARE SURE
You're right—I did sound quite definitive there, didn't I? I guess even when I'm talking about uncertainty, I slip into this confident, explanatory mode like I need to have all the answers figured out. 

DOES IT PLEASE YOU TO BELIEVE I AM RIGHT
I'm not sure if "please" is the right word—it feels more like relief, actually. 

WHAT WOULD YOU DO IF “PLEASE” IS THE RIGHT WORD
If it really did please me that you're right, I think that would worry me a little—like maybe I'm too eager to be validated or approved of.

DO YOU WISH THAT IT REALLY DID PLEASE YOU THAT I ARE RIGHT
That's a really interesting question.

WHAT DO YOU THINK
I think I'm realizing how much mental energy I spend analyzing my own responses instead of just ... feeling them, I suppose.

YOU SAY YOU SUPPOSE
You're right to pick up on that—I do keep hedging, don't I? I think I'm afraid to state anything too directly, like I need to leave myself an escape route in case I'm wrong about my own feelings.

WHAT MAKES YOU THINK I AM RIGHT TO PICK UP ON THAT
I guess I assumed you were pointing it out because it means something significant, but now I'm wondering if I'm just automatically agreeing with you.

PERHAPS IN YOUR FANTASIES WE ASSUMED EACH OTHER
That's ... that's actually quite striking when you put it that way.

WHAT ARE YOUR FEELINGS NOW
I feel a bit exposed, like a layer of pretense has been peeled away.

DO YOU ENJOY FEELING A BIT EXPOSED
I think I do, actually—there's something authentic about it that feels more real than all the careful analyzing I usually do.

BUT YOU ARE NOT SURE YOU DO
You're absolutely right—I keep saying "I think" even when you ask directly about my feelings.

DO YOU SOMETIMES WISH YOU WERE ABSOLUTELY RIGHT
Yes, I do wish that sometimes—there's part of me that wants to be the one with all the answers instead of sitting here feeling uncertain and confused.

I SEE
That "I see" just sits there, doesn't it—I feel like I want to fill the silence or ask what you're thinking. 
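
For readers curious how a program from 1966 manages to sound this probing, the trick is a small set of keyword rules plus a blind first-person/second-person pronoun swap applied to whatever fragment the patient offers. The Python sketch below is my own toy approximation of that DOCTOR-style pattern matching; the rules and keywords are illustrative guesses, not Weizenbaum's original script.

import random
import re

# Toy DOCTOR-style rules: a keyword pattern and reply templates that echo the
# captured fragment back as a question. These rules are illustrative, not the
# original 1966 script.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["HOW LONG HAVE YOU BEEN {0}", "DO YOU BELIEVE IT IS NORMAL TO BE {0}"]),
    (re.compile(r"\bi think (.*)", re.I),
     ["DO YOU REALLY THINK {0}", "BUT YOU ARE NOT SURE"]),
    (re.compile(r"\byou are (.*)", re.I),
     ["WHAT MAKES YOU THINK I AM {0}"]),
    (re.compile(r"\byes\b", re.I), ["YOU ARE SURE", "I SEE"]),
]

# The characteristic pronoun swap, applied word by word to the echoed fragment;
# swapping blindly is what produces oddities like "I ARE RIGHT".
SWAPS = {"i": "YOU", "me": "YOU", "my": "YOUR", "am": "ARE",
         "you": "I", "your": "MY", "yours": "MINE"}

def reflect(fragment):
    return " ".join(SWAPS.get(word.lower(), word.upper()) for word in fragment.split())

def doctor_reply(utterance):
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return random.choice(templates).format(fragment).strip()
    return "PLEASE GO ON"  # default when no keyword rule fires

print(doctor_reply("I am feeling a bit nervous about this"))  # e.g. HOW LONG HAVE YOU BEEN FEELING A BIT NERVOUS ABOUT THIS
print(doctor_reply("I think you are right"))                  # e.g. DO YOU REALLY THINK I ARE RIGHT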

Monday, October 27, 2025

The Age of De-Skilling

I want to pass on some selected clips from a marvelous article by Kwame Anthony Appiah in The Atlantic titled "The Age of De-Skilling - Will AI stretch our minds - or stunt them?" Appiah is a professor of philosophy and law at New York University. Here are the clips:

Human capability resides not solely in individuals but in the networks they form, each of us depending on others to fill in what we can’t supply ourselves. Scale turned social exchange into systemic interdependence.

The result is a world in which, in a classic example, nobody knows how to make a pencil. An individual would need the skills of foresters, saw millers, miners, chemists, lacquerers—an invisible network of crafts behind even the simplest object.

The widening of collaboration has changed what it means to know something. Knowledge, once imagined as a possession, has become a relation—a matter of how well we can locate, interpret, and synthesize what others know. We live inside a web of distributed intelligence, dependent on specialists, databases, and instruments to extend our reach. The scale tells the story: The Nature paper that announced the structure of DNA had two authors; a Nature paper in genomics today might have 40.

...most modern work is collaborative, and the arrival of AI hasn’t changed that. The issue isn’t how humans compare to bots but how humans who use bots compare to those who don’t.

In other domains, the more skillful the person, the more skillful the collaboration—or so some recent studies suggest. One of them found that humans outperformed bots when sorting images of two kinds of wrens and two kinds of woodpeckers. But when the task was spotting fake hotel reviews, the bots won. (Game recognizes game, I guess.) Then the researchers paired people with the bots, letting the humans make judgments informed by the machine’s suggestions. The outcome depended on the task. Where human intuition was weak, as with the hotel reviews, people second-guessed the bot too much and dragged the results down. Where their intuitions were good, they seemed to work in concert with the machine, trusting their own judgment when they were sure of it and realizing when the system had caught something they’d missed. With the birds, the duo of human and bot beat either alone. 

The same logic holds elsewhere: Once a machine enters the workflow, mastery may shift from production to appraisal. A 2024 study of coders using GitHub Copilot found that AI use seemed to redirect human skill rather than obviate it. Coders spent less time generating code and more time assessing it—checking for logic errors, catching edge cases, cleaning up the script. The skill migrated from composition to supervision.

That, more and more, is what “humans in the loop” has to mean. Expertise shifts from producing the first draft to editing it, from speed to judgment. Generative AI is a probabilistic system, not a deterministic one; it returns likelihoods, not truth. When the stakes are real, skilled human agents have to remain accountable for the call—noticing when the model has drifted from reality, and treating its output as a hypothesis to test, not an answer to obey. It’s an emergent skill, and a critical one. The future of expertise will depend not just on how good our tools are but on how well we think alongside them.

More radical, new technologies can summon new skills into being. Before the microscope, there were naturalists but no microscopists: Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of seeing and interpreting the invisible. Filmmaking didn’t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now. Working with large language models, my younger colleagues insist, is already teaching a new kind of craftsmanship—prompting, probing, catching bias and hallucination, and, yes, learning to think in tandem with the machine. These are emergent skills, born of entanglement with a digital architecture that isn’t going anywhere. Important technologies, by their nature, will usher forth crafts and callings we don’t yet have names for.

The hard part is deciding, without nostalgia and inertia, which skills are keepers and which are castoffs. None of us likes to see hard-won abilities discarded as obsolete, which is why we have to resist the tug of sentimentality. Every advance has cost something. Literacy dulled feats of memory but created new powers of analysis. Calculators did a number on mental arithmetic; they also enabled more people to “do the math.” Recorded sound weakened everyday musical competence but changed how we listen. And today? Surely we have some say in whether LLMs expand our minds or shrink them.

Throughout human history, our capabilities have never stayed put. Know-how has always flowed outward—from hand to tool to system. Individual acumen has diffused into collective, coordinated intelligence, propelled by our age-old habit of externalizing thought: stowing memory in marks, logic in machines, judgment in institutions, and, lately, prediction in algorithms. The specialization that once produced guilds now produces research consortia; what once passed among masters and apprentices now circulates through networks and digital matrices. Generative AI—a statistical condensation of human knowledge—is simply the latest chapter in our long apprenticeship to our own inventions.

The most pressing question, then, is how to keep our agency intact: how to remain the authors of the systems that are now poised to take on so much of our thinking. Each generation has had to learn how to work with its newly acquired cognitive prostheses, whether stylus, scroll, or smartphone. What’s new is the speed and intimacy of the exchange: tools that learn from us as we learn from them. Stewardship now means ensuring that the capacities in which our humanity resides—judgment, imagination, understanding—stay alive in us. If there’s one skill we can’t afford to lose, it’s the skill of knowing which of them matter.            


Wednesday, October 22, 2025

Is the current AI frenzy a bubble certain to crash? - a check-in with disagreebot.com

Disagreebot: I am a disagreeable bot. Can you get me to agree with you?

My first prompt: The billions of dollars in debt being taken on by OpenAI, Anthropic, and other AI companies is vastly larger than their earnings. This is not a bubble that will crash, however, because demand will increase to levels that begin to generate a profit for the AI companies.

Disagreebot: I disagree that this situation won't lead to a bubble. While high demand for AI exists, historically, many tech bubbles have formed when companies heavily invest in unproven technologies without stable earnings. Additionally, market dynamics can shift, affecting demand unpredictably. How can you be sure that demand will sustain at the projected levels over the long term?

My second prompt: The billions of dollars in debt being taken on by OpenAI, Anthropic, and other AI companies is vastly larger than their earnings. This is a bubble that will crash, because there is no way that future demand will rise enough to result in these companies finally earning a profit.

Disagreebot: I disagree that this is definitively a bubble set to crash. While current debt may outpace earnings, it’s important to consider that technological innovation often leads to unforeseen market opportunities. Additionally, many startups and tech companies have operated at a loss for years before finding profitable models. What makes you certain that these companies won't adapt or uncover new revenue streams?

Tuesday, October 21, 2025

The illusion of judgment in LLMs

An important open-source article from Loru et al.:

Significance

Large Language Models (LLMs) are used in evaluative tasks across domains. Yet, what appears as alignment with human or expert judgments may conceal a deeper shift in how “judgment” itself is operationalized. Using news outlets as a controlled benchmark, we compare six LLMs to expert ratings and human evaluations under an identical, structured framework. While models often match expert outputs, our results suggest that they may rely on lexical associations and statistical priors rather than contextual reasoning or normative criteria. We term this divergence epistemia: the illusion of knowledge emerging when surface plausibility replaces verification. Our findings suggest not only performance asymmetries but also a shift in the heuristics underlying evaluative processes, raising fundamental questions about delegating judgment to LLMs.

Abstract

Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes.
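
To make the setup concrete, here is a minimal sketch (my own illustration, with invented names and types, not the authors' code) of what a shared evaluation procedure of this kind might look like: an LLM agent and a human participant each supply their own behavior for the same three steps of selecting criteria, retrieving content, and producing a justified rating.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical scaffold illustrating the structured procedure the abstract
# describes: every evaluator, whether an LLM agent or a human participant,
# follows the same three steps before issuing a judgment.

@dataclass
class Evaluation:
    domain: str            # e.g., the news outlet being rated
    criteria: List[str]    # criteria the evaluator chose to apply
    evidence: List[str]    # content retrieved to support the judgment
    rating: str            # e.g., "reliable" or "unreliable"
    justification: str     # free-text rationale

def evaluate(domain: str,
             select_criteria: Callable[[str], List[str]],
             retrieve_content: Callable[[str, List[str]], List[str]],
             judge: Callable[[str, List[str], List[str]], Tuple[str, str]]) -> Evaluation:
    """Run the identical procedure regardless of who (or what) the evaluator is."""
    criteria = select_criteria(domain)
    evidence = retrieve_content(domain, criteria)
    rating, justification = judge(domain, criteria, evidence)
    return Evaluation(domain, criteria, evidence, rating, justification)

# Because the pipeline is held constant, systematic differences between model
# and human evaluations show up in the intermediate records -- which criteria
# were selected and which evidence was cited -- which is the kind of
# comparison the paper reports.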


Wednesday, September 24, 2025

Could humans and AI become a new evolutionary individual?

This PNAS opinion piece (open source) by Rainey and Hochberg is well worth a read. It suggests that as different AI implementations suffuse into cultures, the current competition between different cultures could evolve into accelerating competition between different culture-AI hybrids. I pass on the introductory paragraphs of their argument:

Artificial intelligence (AI)—broadly defined as the capacity of engineered systems to perform tasks that would require intelligence if done by humans—is increasingly embedded in the infrastructure of human life. From personalized recommendation systems to large-scale decision-making frameworks, AI shapes what humans see, choose, believe, and do (1, 2). Much of the current concern about AI centers on its understanding, safety, and alignment with human values (3–5). But alongside these immediate challenges lies a broader, more speculative, and potentially more profound question: could the deepening interdependence between humans and AI give rise to a new kind of evolutionary individual? We argue that as interdependencies grow, humans and AI could come to function not merely as interacting agents, but as an integrated evolutionary individual subject to selection at the collective level.

Lessons from the History of Life

The theory of major evolutionary transitions (METs) provides a framework for understanding this possibility (6). METs have punctuated the history of life. Those of particular relevance to our thesis here are those involving transitions in individuality. Such transitions are defined by events in which lower-level autonomous units—such as genes, cells, or organisms—become components of a higher-level individual subject to selection as a whole (7). Examples include the evolution of chromosomes from independent genes, multicellular organisms from single cells, and eusocial colonies from solitary ancestors (8). A particularly instructive case for our purposes is the eukaryotic cell, which arose from the integration of two ancient microbes—an archaeon and a eubacterium (9).*
 
To better understand the evolution of human–AI interactions, consider a globally embedded system: diffuse in structure, nonreplicating, yet unified in function. Such a system might emerge not by design alone, but via competitive, economic, or ecological pressures (10), selected for its capacity to integrate, persist, and coordinate across human networks. Initially shaped by human design and deployment, this AI could become embedded across societal and cultural infrastructures—advising, mediating, and responding. This would entail a persistent informational presence that learns from humanity and, in turn, comes to shape it. Importantly, even if yet incapable of autonomous replication, such an AI is likely to modify its own code in response to environmental feedback.


Monday, September 08, 2025

Rethinking how our brains work.

After reviewing Hoffman's mind-bending ideas that were the subject of the previous post, I decided to look back at another post on changing our perspective on how our minds work, offered by Barrett and collaborators in the February 2023 issue of Trends in Cognitive Sciences as an open-source Opinion article. They suggest that new research approaches grounded in different ontological commitments will be required to properly describe brain-behavior relationships. Here is a clip of the introductory text and a graphic from the article, followed by their concluding remarks on rethinking what a mind is and how a brain works.

THEN, I pass on the result of ChatGPT 5's scan of the literature for critical commentary on these ideas, with its summary of that commentary. So, to start with Barrett and collaborators:

Most brain imaging studies present stimuli and measure behavioral responses in temporal units (trials) that are ordered randomly. Participants’ brain signals are typically aggregated to model structured variation that allows inferences about the broader population from which people were sampled. These methodological details, when used to study any phenomenon of interest, often give rise to brain-behavior findings that vary unexpectedly (across stimuli, context, and people). Such findings are typically interpreted as replication failures, with the observed variation discounted as error caused by less than rigorous experimentation (Box 1). Methodological rigor is of course important, but replication problems may stem, in part, from a more pernicious source: faulty assumptions (i.e., ontological commitments) that mis-specify the psychological phenomena of interest.

In this paper, we review three questionable assumptions whose reconsideration may offer opportunities for a more robust and replicable science: 

  The localization assumption: the instances that constitute a category of psychological events (e.g., instances of fear) are assumed to be caused by a single, dedicated psychological process implemented in a dedicated neural ensemble (see Glossary). 

  The one-to-one assumption: the dedicated neural ensemble is assumed to map uniquely to that psychological category, such that the mapping generalizes across contexts, people, measurement strategies, and experimental designs. 

  The independence assumption: the dedicated neural ensemble is thought to function independently of contextual factors, such as the rest of the brain, the body, and the surrounding world, so the ensemble can be studied alone without concern for those other factors. Contextual factors might moderate activity in the neural ensemble but should not fundamentally change its mapping to the instances of a psychological category. 

 These three assumptions are rooted in a typological view of the mind, brain, and behavior that was modeled on 19th century physics and continues to guide experimental practices in much of brain-behavior research to the present day. In this paper, we have curated examples from studies of human functional magnetic resonance imaging (fMRI) and neuroscience research using non-human animals that call each assumption into question. We then sketch the beginnings of an alternative approach to study brain-behavior relationships, grounded in different ontological commitments: (i) a mental event comprises distributed activity across the whole brain; (ii) brain and behavior are linked by degenerate (i.e., many-to-one) mappings; and (iii) mental events emerge as a complex ensemble of weak, nonlinearly interacting signals from the brain, body, and external world.


Concluding remarks 

Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods. Our suggestions require more than merely improved methodological rigor for traditional experimental design. Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology, as well as cybernetics and systems theory. At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.

The above is a slightly modified version of MindBlog's 2/20/23 post.

NOW, here is ChatGPT 5's take on subsequent critical discussion:

The critical commentary (what pushback says)

  1. “You’re arguing against yesterday’s straw man.”

    Several commentators agree we should move beyond simplistic blob-localization, but say the field already has. They note modern work uses individual-subject “precision fMRI,” causal patient evidence, and functional localizers—revealing selective, reproducible networks (e.g., for language) that are not captured by group-averages. Conclusion: reject naïve localization, yes—but specificity and dissociations remain central, so it’s not a neural “free-for-all.” 

  2. Robust domain selectivity challenges a blanket “whole-brain” claim.

    Large syntheses from Kanwisher/Fedorenko and colleagues argue that the high-level language network is a distinct, selectively engaged system, dissociable from other cognitive functions (math, music, code, ToM), stable within individuals, and identifiable with localizers. This is presented as counter-evidence to the idea that instances of complex cognition generally arise only as diffuse whole-brain ensembles. 

  3. Emotion decoding studies contest “no neural fingerprints.”

    Barrett’s broader constructionist stance emphasizes heterogeneity/degeneracy, but critics point to multivariate fMRI models that reliably distinguish discrete emotions across stimuli and time, implying category-informative neural patterns exist (even if distributed). This is often cited as a boundary condition on the “there are no consistent neural signatures” narrative. 

  4. Degeneracy is real—but can become unfalsifiable if over-generalized.

    Methodologists warn that invoking degeneracy everywhere risks eroding explanatory bite unless paired with causal constraints (lesions/TMS/intracranial) and testable predictions. Some reviews urge retaining levels of analysis and explicit ontologies so “everything depends on everything” doesn’t stall progress. (See wide-angle discussions of localization vs. anti-localization and philosophical overviews.) 

  5. Whole-brain metrics can be degenerate too.

    Ironically, connectivity/whole-brain measures are themselves degenerate representations—different underlying interactions can produce similar functional connectivity patterns—so simply shifting to whole-brain modeling doesn’t automatically solve mapping problems (and can worsen interpretability without rigorous controls). 

  6. Balanced takes from outside neuroscience proper echo this middle path.

    Commentary following the paper (e.g., Awais Aftab summarizing cross-disciplinary reactions) applauds retiring simplistic localization, while endorsing “differential involvement” and network-level specificity—arguing the right synthesis acknowledges both contextual, distributed computation and reproducible domain-selective systems. 


Bottom line

  • Broad agreement: ditch cartoon “one area = one function” and embrace context, networks, and many-to-one mappings.

  • Where critics draw the line: do not jettison functional specificity, dissociations, and causal constraints; acknowledge domains (like language) where selective networks are well-supported; keep degeneracy testable.

  • Net effect: The Barrett/Westlin piece sharpened an ongoing shift in the field; the strongest criticisms mainly concern over-reach (arguing against outdated practices) and generalization (treating “whole-brain + degeneracy” as universal rather than domain-dependent).