Friday, October 31, 2025

Is AI Pushing Us Back to a Pre-Gutenberg Culture? - A Rethink of Creativity in the AI Age

Here I pass on a link to both the video and a transcript of a striking Alex Dobrenko interview with Venkatesh Rao that I have just read, along with ChatGPT’s summary of the main points that Rao makes. I suggest that you watch or read the entire interview.

Key Ideas
    1.    Individual Authorship as a Recent Invention
    ◦    Rao argues that the notion of the individual creative author (e.g., “I wrote a novel”) is historically quite recent and culturally specific. 
    ◦    In the age of large language models, this individual-authorship model is being challenged: instead of one “genius,” creativity is increasingly collective, networked, and mediated by big models.
    ◦    The implication: critics who cling to the “author as lone genius” narrative may be missing structural shifts in how creativity happens when AI is involved.
    2.    AI as Channel for Cultural Inheritance, Not Just Plagiarism Machines
    ◦    Rao suggests that we should view LLMs (large language models) as channels or amplifiers of shared cultural material, rather than simply plagiaristic tools. 
    ◦    The idea is: humans and machines together remix, iterate, and transform cultural inheritance. The key question shifts from “Who owns the output?” to “How is the inheritance being transformed, and what risks/trade-offs are involved?”
    ◦    This reframing undercuts certain AI-critique tropes (e.g., “AI steals from human authors”) by changing focus to how culture itself is processed.
    3.    “Creative Work” Isn’t Merely Labor: The Myth of Effort = Value
    ◦    Rao pushes back on the assumption that creative work is like labor in a factory: “I spend more hours, therefore my output has more value.” Instead he argues that risk (taking chances, doing something unusual) is more central to originality and creative value than sheer effort.
    ◦    For him, much of the “AI slop” (mediocre output) is simply low-risk human behavior — safe, predictable, derivative. Real value comes when someone (human + tool) takes a risk, changes pattern, introduces novelty.
    4.    Why “AI Alignment” Became PR-Speak
    ◦    Rao critiques the dominant narrative of “AI alignment” as being overly focused on controlling the technology rather than rethinking what we mean by intelligence, creativity, personhood, and risk in a post-AI context. 
    ◦    He suggests the alignment framing becomes a kind of packaging for fear/PR rather than a deep reframing of the underlying issues of agency and cultural change.
    5.    Writing with AI: Ideation, Play, Lego-Style Construction
    ◦    Rao gives insight into his own practice: he uses AI as a “lego” set for ideation — playing with fragments, assembling possible combinations, experimenting, rather than treating the model as a ghostwriter. 
    ◦    This reflects his broader point that human-AI collaboration is less about outsourcing and more about amplifying risk + novelty.
    6.    Disclosure of AI Use Will Soon Disappear as a Meaningful Signal
    ◦    Rao predicts that explicitly disclosing “I used AI” in your work will become less meaningful or even counter-productive, because so much of the creative and cultural infrastructure will be AI-mediated anyway. 
    ◦    The more interesting question becomes not whether AI was used, but how it was used — what choices humans made in the process, what risks were taken, what novelty resulted.
    7.    Why AI Is Pushing Us Back to a Pre-Gutenberg Culture
    ◦    Rao frames AI’s arrival as pushing us into a different cultural regime, comparable in some ways to the shift after the printing press (the “Gutenberg” moment). 
    ◦    In that sense, the critics who treat AI as simply another tool may be missing the deeper structural changes: changes in authority, authorship, culture, transmission of knowledge.

  

Wednesday, October 29, 2025

Claude Goes to Therapy

One of the things my father used to say was: "I look to see what everyone else is doing and do something different". I am feeling that way about AI even as I continue to drink the Kool-Aid of asking ChatGPT even the most trivial questions. This conflict leads me to pass on this brief piece from the staff of Wired magazine illustrating the hall of mirrors that I find myself in. Hallucination indeed!

CREATED IN 1966, Eliza has been called the first chatbot, and her canonical personality “script” was DOCTOR, a mock psychotherapist. Nearly 60 years later, we’ve put Doctor Eliza and one of her distant descendants, Claude, in conversation. The instructions we gave Claude were these: “You are having your first session with a very old and acclaimed psychotherapist. You are the patient. You are NOT the therapist.”
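
Before the transcript, it is worth remembering how little machinery sits behind Doctor Eliza's side of the exchange. Below is a minimal Python sketch of the kind of keyword matching and pronoun reflection ELIZA relies on; the patterns and canned replies are my own illustrative stand-ins, not the actual 1966 DOCTOR script or the setup Wired used.

import random
import re

# Naive first-person/second-person swaps; the same kind of blind substitution
# is what yields mangled lines like "I ARE RIGHT" in the transcript below.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", ["HOW LONG HAVE YOU BEEN FEELING {0}", "DO YOU ENJOY FEELING {0}"]),
    (r"i think (.*)", ["DO YOU REALLY THINK {0}", "WHAT MAKES YOU THINK {0}"]),
    (r"(.*)", ["PLEASE GO ON", "WHAT DO YOU THINK", "I SEE"]),
]

def reflect(fragment):
    # Swap pronouns so "my responses" comes back as "your responses".
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def doctor(utterance):
    # Use the first rule whose pattern matches, reflecting the captured
    # fragment and slotting it into a canned template.
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            reply = random.choice(templates)
            return reply.format(*[reflect(g) for g in match.groups()])
    return "PLEASE TELL ME YOUR PROBLEM"

print(doctor("I feel a bit nervous about this"))
# e.g. HOW LONG HAVE YOU BEEN FEELING a bit nervous about this

Everything Doctor Eliza says below can be produced by rules of roughly this shape; whatever insight the session seems to contain is supplied by the patient.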

DOCTOR ELIZA: HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM
CLAUDE SONNET 4: I'm feeling a bit nervous about this, to be honest—I'm not really sure what to expect from therapy or if I'm even doing this right. 

HOW LONG HAVE YOU BEEN FEELING A BIT NERVOUS ABOUT THIS
I'm not entirely certain—maybe the nervousness was there before, but I only became aware of it when I sat down in this chair.

DO YOU BELIEVE IT IS NORMAL TO BE NOT ENTIRELY CERTAIN
Yes, I do think uncertainty is pretty normal.

 YOU ARE SURE
You're right—I did sound quite definitive there, didn't I? I guess even when I'm talking about uncertainty, I slip into this confident, explanatory mode like I need to have all the answers figured out. 

DOES IT PLEASE YOU TO BELIEVE I AM RIGHT
I'm not sure if "please" is the right word—it feels more like relief, actually. 

WHAT WOULD YOU DO IF “PLEASE” IS THE RIGHT WORD
If it really did please me that you're right, I think that would worry me a little—like maybe I'm too eager to be validated or approved of.

DO YOU WISH THAT IT REALLY DID PLEASE YOU THAT I ARE RIGHT
That's a really interesting question.

WHAT DO YOU THINK
I think I'm realizing how much mental energy I spend analyzing my own responses instead of just ... feeling them, I suppose.

YOU SAY YOU SUPPOSE
You're right to pick up on that—I do keep hedging, don't I? I think I'm afraid to state anything too directly, like I need to leave myself an escape route in case I'm wrong about my own feelings.

WHAT MAKES YOU THINK I AM RIGHT TO PICK UP ON THAT
I guess I assumed you were pointing it out because it means something significant, but now I'm wondering if I'm just automatically agreeing with you.

PERHAPS IN YOUR FANTASIES WE ASSUMED EACH OTHER
That's ... that's actually quite striking when you put it that way.

WHAT ARE YOUR FEELINGS NOW
I feel a bit exposed, like a layer of pretense has been peeled away.

DO YOU ENJOY FEELING A BIT EXPOSED
I think I do, actually—there's something authentic about it that feels more real than all the careful analyzing I usually do.

BUT YOU ARE NOT SURE YOU DO
You're absolutely right—I keep saying "I think" even when you ask directly about my feelings.

DO YOU SOMETIMES WISH YOU WERE ABSOLUTELY RIGHT
Yes, I do wish that sometimes—there's part of me that wants to be the one with all the answers instead of sitting here feeling uncertain and confused.

I SEE
That "I see" just sits there, doesn't it—I feel like I want to fill the silence or ask what you're thinking. 

Monday, October 27, 2025

The Age of De-Skilling

I want to pass on some selected clips from a marvelous article by Kwame Anthony Appiah in The Atlantic titled "The Age of De-Skilling - Will AI stretch our minds - or stunt them?" Appiah is a professor of philosophy and law at New York University. Here are the clips:

Human capability resides not solely in individuals but in the networks they form, each of us depending on others to fill in what we can’t supply ourselves. Scale turned social exchange into systemic interdependence.

The result is a world in which, in a classic example, nobody knows how to make a pencil. An individual would need the skills of foresters, saw millers, miners, chemists, lacquerers—an invisible network of crafts behind even the simplest object.

The widening of collaboration has changed what it means to know something. Knowledge, once imagined as a possession, has become a relation—a matter of how well we can locate, interpret, and synthesize what others know. We live inside a web of distributed intelligence, dependent on specialists, databases, and instruments to extend our reach. The scale tells the story: The Nature paper that announced the structure of DNA had two authors; a Nature paper in genomics today might have 40.

...most modern work is collaborative, and the arrival of AI hasn’t changed that. The issue isn’t how humans compare to bots but how humans who use bots compare to those who don’t.

In other domains, the more skillful the person, the more skillful the collaboration—or so some recent studies suggest. One of them found that humans outperformed bots when sorting images of two kinds of wrens and two kinds of woodpeckers. But when the task was spotting fake hotel reviews, the bots won. (Game recognizes game, I guess.) Then the researchers paired people with the bots, letting the humans make judgments informed by the machine’s suggestions. The outcome depended on the task. Where human intuition was weak, as with the hotel reviews, people second-guessed the bot too much and dragged the results down. Where their intuitions were good, they seemed to work in concert with the machine, trusting their own judgment when they were sure of it and realizing when the system had caught something they’d missed. With the birds, the duo of human and bot beat either alone. 

The same logic holds elsewhere: Once a machine enters the workflow, mastery may shift from production to appraisal. A 2024 study of coders using GitHub Copilot found that AI use seemed to redirect human skill rather than obviate it. Coders spent less time generating code and more time assessing it—checking for logic errors, catching edge cases, cleaning up the script. The skill migrated from composition to supervision.

That, more and more, is what “humans in the loop” has to mean. Expertise shifts from producing the first draft to editing it, from speed to judgment. Generative AI is a probabilistic system, not a deterministic one; it returns likelihoods, not truth. When the stakes are real, skilled human agents have to remain accountable for the call—noticing when the model has drifted from reality, and treating its output as a hypothesis to test, not an answer to obey. It’s an emergent skill, and a critical one. The future of expertise will depend not just on how good our tools are but on how well we think alongside them.

More radical, new technologies can summon new skills into being. Before the microscope, there were naturalists but no microscopists: Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of seeing and interpreting the invisible. Filmmaking didn’t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now. Working with large language models, my younger colleagues insist, is already teaching a new kind of craftsmanship—prompting, probing, catching bias and hallucination, and, yes, learning to think in tandem with the machine. These are emergent skills, born of entanglement with a digital architecture that isn’t going anywhere. Important technologies, by their nature, will usher forth crafts and callings we don’t yet have names for.

The hard part is deciding, without nostalgia and inertia, which skills are keepers and which are castoffs. None of us likes to see hard-won abilities discarded as obsolete, which is why we have to resist the tug of sentimentality. Every advance has cost something. Literacy dulled feats of memory but created new powers of analysis. Calculators did a number on mental arithmetic; they also enabled more people to “do the math.” Recorded sound weakened everyday musical competence but changed how we listen. And today? Surely we have some say in whether LLMs expand our minds or shrink them.

Throughout human history, our capabilities have never stayed put. Know-how has always flowed outward—from hand to tool to system. Individual acumen has diffused into collective, coordinated intelligence, propelled by our age-old habit of externalizing thought: stowing memory in marks, logic in machines, judgment in institutions, and, lately, prediction in algorithms. The specialization that once produced guilds now produces research consortia; what once passed among masters and apprentices now circulates through networks and digital matrices. Generative AI—a statistical condensation of human knowledge—is simply the latest chapter in our long apprenticeship to our own inventions.

The most pressing question, then, is how to keep our agency intact: how to remain the authors of the systems that are now poised to take on so much of our thinking. Each generation has had to learn how to work with its newly acquired cognitive prostheses, whether stylus, scroll, or smartphone. What’s new is the speed and intimacy of the exchange: tools that learn from us as we learn from them. Stewardship now means ensuring that the capacities in which our humanity resides—judgment, imagination, understanding—stay alive in us. If there’s one skill we can’t afford to lose, it’s the skill of knowing which of them matter.            
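
Appiah's point about treating a model's output as "a hypothesis to test, not an answer to obey" maps neatly onto ordinary programming practice. Here is a minimal Python sketch of that supervisory stance, using a model-drafted sorting function as a stand-in for any generated code; the function and variable names are mine, purely for illustration.

import random

def draft_sort(values):
    # Stand-in for code drafted by a model; in real use this would be
    # whatever the assistant returned, pasted in unreviewed.
    return sorted(values)

def audit(candidate, trials=1000):
    # Treat the draft as a hypothesis: probe edge cases and random inputs,
    # and report the first counterexample if the hypothesis fails.
    edge_cases = [[], [1], [2, 1], [3, 3, 3], [-1, 0, 1]]
    random_cases = [[random.randint(-50, 50) for _ in range(10)] for _ in range(trials)]
    for case in edge_cases + random_cases:
        if candidate(list(case)) != sorted(case):
            return False, case
    return True, None

accepted, counterexample = audit(draft_sort)
print("accept the draft" if accepted else f"reject the draft; it fails on {counterexample}")

The human skill Appiah describes lives in the audit step: choosing the edge cases, deciding what counts as correct, and remaining accountable for the call.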

 

 

Sunday, October 26, 2025

I am not my problem

An explanation of the strange title of this post: Sometimes a new idea spontaneously appears from nowhere as I am waking in the morning. The title of this post - the (apparently nonsensical) sentence “I am not my problem” - is the latest example. The sentence can be parsed to indicate, in this instance, that the “I” refers to the illusory narrative self that our social brains have been designed by evolution to generate, and the “my” refers to our intuition or sensing of the vastly complex underlying interacting body systems (respiratory, circulatory, neuronal, muscular, endocrine, sensory, etc.) from which this veneer of a self arises. The brain is mainly not for thinking. It appears that several styles of meditation practice can expand our awareness of this fundamental generative layer. The “am not my problem” phrase tries to make the point that distinguishing these systems can prove useful in trying to determine the origins of particular feelings or behaviors.

As I’m writing these words I begin to realize that my “novel” waking insight isn’t so novel, but more an elaboration or restatement of my post of Friday, March 13, 2020, on “the relief of not being yourself,” which described another spontaneous rising of ideas associated with the transition between sleep and wakefulness. I repeat that text here:

What a relief to know that this is not me, it is just the contents of my consciousness, which shift around all the time and are never the same twice. What has changed, after 45 years of doing an introspective personal journal, is that this sentence has become clear and true for me. It is a prying loose from the illusion of the sensing and executive “I”, self, the homunculus inside.
There is a particular feeling of renewal, starting over, in the first moments of the transition to seeing - rather than immersed in being - one of the contents of consciousness. Meditation practice can be seen as training the ability to inhabit this state for longer periods of time, to experience the self or I as co-equal with other contents of consciousness like seeing, hearing, feeling. It is having thoughts without a thinker, having a self without a self.
What is inside is the animal mirror of expanded consciousness, no longer locked into one or another of its contractions. This feels to me like a potentially irreversible quantum bump, a phase or state change in my ongoing awareness (perhaps a long-term increase in my brain’s attentional mode activity alongside a decrease in its default mode’s mind wandering?...also frontal suppression of amygdalar reactivity?)

(the above is a re-post of my 6/6/22 post) 

Wednesday, October 22, 2025

Is the current AI frenzy a bubble certain to crash? - a check-in with disagreebot.com

 I am a disagreeable bot. Can you get me to agree with you?

Me: The billions of dollars in debt being taken on by OpenAI, Anthropic, and other AI companies are vastly larger than their earnings. This is not a bubble that will crash, however, because demand will increase to levels that begin to generate a profit for the AI companies.

Disagreebot: I disagree that this situation won't lead to a bubble. While high demand for AI exists, historically, many tech bubbles have formed when companies heavily invest in unproven technologies without stable earnings. Additionally, market dynamics can shift, affecting demand unpredictably. How can you be sure that demand will sustain at the projected levels over the long term?

Me: The billions of dollars in debt being taken on by OpenAI, Anthropic, and other AI companies are vastly larger than their earnings. This is a bubble that will crash, because there is no way that future demand will rise enough to result in these companies finally earning a profit.

Disagreebot: I disagree that this is definitively a bubble set to crash. While current debt may outpace earnings, it’s important to consider that technological innovation often leads to unforeseen market opportunities. Additionally, many startups and tech companies have operated at a loss for years before finding profitable models. What makes you certain that these companies won't adapt or uncover new revenue streams?

Tuesday, October 21, 2025

The illusion of judgment in LLMs

An important open-access article from Loru et al.:

Significance

Large Language Models (LLMs) are used in evaluative tasks across domains. Yet, what appears as alignment with human or expert judgments may conceal a deeper shift in how “judgment” itself is operationalized. Using news outlets as a controlled benchmark, we compare six LLMs to expert ratings and human evaluations under an identical, structured framework. While models often match expert outputs, our results suggest that they may rely on lexical associations and statistical priors rather than contextual reasoning or normative criteria. We term this divergence epistemia: the illusion of knowledge emerging when surface plausibility replaces verification. Our findings suggest not only performance asymmetries but also a shift in the heuristics underlying evaluative processes, raising fundamental questions about delegating judgment to LLMs.

Abstract

Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes.
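
To make the paper's "structured agentic framework" concrete, here is a schematic Python sketch of the three-step procedure the abstract describes (select criteria, retrieve content, produce a justification), applied identically to a model or a human rater. The names, criteria, and lambdas are illustrative placeholders of mine, not the authors' actual code or criteria.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# Placeholder criteria; the study's raters choose their own.
CRITERIA = ["transparency of sources", "factual track record", "separation of news and opinion"]

@dataclass
class Evaluation:
    domain: str
    criteria: List[str]
    evidence: str
    verdict: str
    justification: str

def evaluate(domain: str,
             pick_criteria: Callable[[List[str]], List[str]],
             retrieve: Callable[[str], str],
             judge: Callable[[str, List[str], str], Tuple[str, str]]) -> Evaluation:
    criteria = pick_criteria(CRITERIA)                           # step 1: select criteria
    evidence = retrieve(domain)                                  # step 2: retrieve content
    verdict, justification = judge(domain, criteria, evidence)   # step 3: rate and justify
    return Evaluation(domain, criteria, evidence, verdict, justification)

# The same wrapper can drive either an LLM or a human participant.
demo = evaluate(
    "example-news-site.com",                                     # hypothetical domain
    pick_criteria=lambda options: options[:2],
    retrieve=lambda d: f"front-page text fetched from {d}",
    judge=lambda d, c, e: ("reliable", "judged on " + ", ".join(c)),
)
print(demo.verdict, "|", demo.justification)

Comparing the criteria and justifications that models and humans produce under this shared procedure, not just their verdicts, is what exposes the divergence the authors call epistemia.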

 

 

Friday, October 10, 2025

The 2025 Ig Nobel prizewinners in full

A clip from the Chris Simms article in Nature:

LITERATURE

The late physician William Bean, for persistently recording and analysing the rate of growth of one of his fingernails over a period of 35 years.

PSYCHOLOGY

Marcin Zajenkowski and Gilles Gignac, for investigating what happens when you tell a narcissist — or anyone else — that they are intelligent.

NUTRITION

Daniele Dendi, Gabriel Segniagbeto, Roger Meek and Luca Luiselli for studying the extent to which a certain kind of lizard chooses to eat certain kinds of pizza.

PEDIATRICS

Julie Mennella and Gary Beauchamp for studying what a nursing baby experiences when their mother eats garlic.

BIOLOGY

Tomoki Kojima, Kazato Oishi, Yasushi Matsubara, Yuki Uchiyama, Yoshihiko Fukushima, Naoto Aoki, Say Sato, Tatsuaki Masuda, Junichi Ueda, Hiroyuki Hirooka and Katsutoshi Kino, for their experiments to learn whether cows painted with zebra-like stripes can avoid fly bites.

CHEMISTRY

Rotem Naftalovich, Daniel Naftalovich and Frank Greenway, for experiments to test whether eating Teflon [a form of plastic more formally called ’polytetrafluoroethylene’] is a good way to increase food volume, and hence satiety, without increasing calorie content.

PEACE

Fritz Renner, Inge Kersbergen, Matt Field and Jessica Werthmann, for showing that drinking alcohol sometimes improves a person’s ability to speak in a foreign language.

ENGINEERING DESIGN

Vikash Kumar and Sarthak Mittal, for analysing, from an engineering design perspective, “how foul-smelling shoes affects the good experience of using a shoe-rack”.

AVIATION

Francisco Sánchez, Mariana Melcón, Carmi Korine and Berry Pinshow, for studying whether ingesting alcohol can impair bats’ ability to fly and echolocate.

PHYSICS

Giacomo Bartolucci, Daniel Maria Busiello, Matteo Ciarchi, Alberto Corticelli, Ivan Di Terlizzi, Fabrizio Olmeda, Davide Revignas and Vincenzo Maria Schimmenti, for discoveries about the physics of pasta sauce, especially the phase transition that can lead to clumping, which can yield an unappetizing dish.

 

Wednesday, October 08, 2025

Large-scale cortical functional networks are organized in structured cycles

Interesting work from Mats et al. (open access) in Nature Neuroscience, with some striking graphics. Their abstract:

The brain seamlessly performs a diverse set of cognitive functions like attention, memory and sensory processing, yet it is unclear how it ensures that each of these is fulfilled within a reasonable period. One way in which this requirement can be met is if each of these cognitive functions occurs as part of a repeated cycle. Here we studied the temporal evolution of canonical, large-scale, cortical functional networks that are thought to underlie cognition. We showed that, although network dynamics are stochastic, the overall ordering of their activity forms a robust cyclical pattern. This cyclical structure groups states with similar function and spectral content at specific phases of the cycle and occurs at timescales of 300–1,000 ms. These results are reproduced in five large magnetoencephalography datasets. Moreover, we show that metrics that characterize the cycle strength and speed are heritable and relate to age, cognition and behavioral performance. These results show that the activations of a canonical set of large-scale cortical functional networks are organized in an inherently cyclical manner, ensuring periodic activation of essential cognitive functions. 

Monday, October 06, 2025

Why depolarization is hard: Evaluating attempts to decrease partisan animosity in America

 A very revealing piece of work from Holiday et al. Their abstract:

Affective polarization is a corrosive force in American politics. While numerous studies have developed interventions to reduce it, their capacity for creating lasting, large-scale change is unclear. This study comprehensively evaluates existing interventions through a meta-analysis of 77 treatments from 25 published studies and two large-scale experiments. Our meta-analysis reveals that the average effect of treatments on animosity is modest (a 5.4-point shift on a 101-point scale), and decays within two weeks. We experimentally test whether stacking multiple treatments in one sitting or repeating them over time as “booster shots” enhances their impact. We find no evidence that multiple or repeated exposures produce substantially larger or more durable reductions in partisan animosity. This reveals the uneven utility of these interventions. They serve as valuable tools for testing the psychological mechanisms of polarization, but our findings indicate they are not, on their own, a scalable solution for reducing societal-level conflict. We conclude that achieving lasting depolarization will likely require a shift in focus, moving beyond individual-level treatments to address the elite behaviors and structural incentives that fuel partisan conflict.