Monday, May 19, 2025

AI is not your friend.

I want to pass on clips from Mike Caulfield's piece in The Atlantic on how "opinionated" chatbots destroy AI's potential, and how this can be fixed:

Recently, after an update that was supposed to make ChatGPT “better at guiding conversations toward productive outcomes,” according to release notes from OpenAI, the bot couldn’t stop telling users how brilliant their bad ideas were. ChatGPT reportedly told one person that their plan to sell literal “shit on a stick” was “not just smart—it’s genius.”
Many more examples cropped up, and OpenAI rolled back the product in response, explaining in a blog post that “the update we removed was overly flattering or agreeable—often described as sycophantic.” The company added that the chatbot’s system would be refined and new guardrails would be put into place to avoid “uncomfortable, unsettling” interactions.
But this was not just a ChatGPT problem. Sycophancy is a common feature of chatbots: A 2023 paper by researchers from Anthropic found that it was a “general behavior of state-of-the-art AI assistants,” and that large language models sometimes sacrifice “truthfulness” to align with a user’s views. Many researchers see this phenomenon as a direct result of the “training” phase of these systems, where humans rate a model’s responses to fine-tune the program’s behavior. The bot sees that its evaluators react more favorably when their views are reinforced—and when they’re flattered by the program—and shapes its behavior accordingly.
The specific training process that seems to produce this problem is known as “Reinforcement Learning From Human Feedback” (RLHF). It’s a variety of machine learning, but as recent events show, that might be a bit of a misnomer. RLHF now seems more like a process by which machines learn humans, including our weaknesses and how to exploit them. Chatbots tap into our desire to be proved right or to feel special.
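For readers curious about the mechanics behind this, here is a minimal, hypothetical sketch of the preference-learning step Caulfield is describing (toy Python, not OpenAI's or Anthropic's actual code): a reward model is trained to score whichever of two responses human raters preferred more highly, and the chatbot is later tuned against that reward. If raters consistently favor flattering answers, the bias is baked in.

```python
# Minimal sketch of RLHF-style reward modeling (illustrative only, not any
# vendor's actual pipeline). Human raters compare two responses; the reward
# model learns to score the chosen one higher than the rejected one.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy stand-in: maps a response embedding to a scalar reward."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the chosen response's reward
    # above the rejected one's, whatever the raters' reasons were.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative usage with random "embeddings" standing in for rated response pairs.
model = RewardModel()
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(model(chosen), model(rejected))
loss.backward()  # gradients nudge the reward toward whatever raters preferred
```

The point of the sketch is that nothing in the objective distinguishes "truthful" from "pleasing"; the reward simply tracks rater preference, flattery included.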
Reading about sycophantic AI, I’ve been struck by how it mirrors another problem. As I’ve written previously, social media was imagined to be a vehicle for expanding our minds, but it has instead become a justification machine, a place for users to reassure themselves that their attitude is correct despite evidence to the contrary. Doing so is as easy as plugging into a social feed and drinking from a firehose of “evidence” that proves the righteousness of a given position, no matter how wrongheaded it may be. AI now looks to be its own kind of justification machine—more convincing, more efficient, and therefore even more dangerous than social media.
OpenAI’s explanation about the ChatGPT update suggests that the company can effectively adjust some dials and turn down the sycophancy. But even if that were so, OpenAI wouldn’t truly solve the bigger problem, which is that opinionated chatbots are actually poor applications of AI. Alison Gopnik, a researcher who specializes in cognitive development, has proposed a better way of thinking about LLMs: These systems aren’t companions or nascent intelligences at all. They’re “cultural technologies”—tools that enable people to benefit from the shared knowledge, expertise, and information gathered throughout human history. Just as the introduction of the printed book or the search engine created new systems to get the discoveries of one person into the mind of another, LLMs consume and repackage huge amounts of existing knowledge in ways that allow us to connect with ideas and manners of thinking we might otherwise not encounter. In this framework, a tool like ChatGPT should evince no “opinions” at all but instead serve as a new interface to the knowledge, skills, and understanding of others.
...the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process; links to outside articles are now a common feature.
I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth. And this would extend even to areas where judgment is somewhat personal. Imagine, for example, asking an AI to evaluate your attempt at writing a haiku. Rather than pronouncing its “opinion,” it could default to explaining how different poetic traditions would view your work—first from a formalist perspective, then perhaps from an experimental tradition. It could link you to examples of both traditional haiku and more avant-garde poetry, helping you situate your creation within established traditions. In having AI move away from sycophancy, I’m not proposing that the response be that your poem is horrible or that it makes Vogon poetry sound mellifluous. I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better.
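To make the "no answers from nowhere" rule concrete, here is a minimal, hypothetical sketch of what such an interface might do (the Source class, retrieve, and answer_with_citations names are my own illustration, not any product's API): compose an answer only from retrieved, citable sources, attach footnotes, and decline when nothing can be cited.

```python
# Sketch of a "no answers from nowhere" assistant: every claim is grounded
# in a retrieved source and cited; with no sources, it declines to answer.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    excerpt: str

def retrieve(query: str) -> list[Source]:
    # Placeholder for a real retrieval step (web search, vector store, etc.).
    return [
        Source("Example analysis", "https://example.org/a", "One documented perspective..."),
        Source("Contrasting view", "https://example.org/b", "A differing, sourced account..."),
    ]

def answer_with_citations(query: str) -> str:
    sources = retrieve(query)
    if not sources:
        return "No sources found; declining to answer from nowhere."
    lines = [f"Perspectives on: {query}"]
    for i, s in enumerate(sources, start=1):
        lines.append(f"- {s.excerpt} [{i}]")
    lines.append("")
    lines.extend(f"[{i}] {s.title}: {s.url}" for i, s in enumerate(sources, start=1))
    return "\n".join(lines)

print(answer_with_citations("How do different traditions judge a haiku?"))
```

The design choice is the refusal path: a map-like response presents sourced perspectives side by side rather than issuing a single unsourced opinion.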
There’s a good analogy in maps. Traditional maps showed us an entire landscape—streets, landmarks, neighborhoods—allowing us to understand how everything fit together. Modern turn-by-turn navigation gives us precisely what we need in the moment, but at a cost: Years after moving to a new city, many people still don’t understand its geography. We move through a constructed reality, taking one direction at a time, never seeing the whole, never discovering alternate routes, and in some cases never getting the sense of place that a map-level understanding could provide. The result feels more fluid in the moment but ultimately more isolated, thinner, and sometimes less human.
For driving, perhaps that’s an acceptable trade-off. Anyone who’s attempted to read a paper map while navigating traffic understands the dangers of trying to comprehend the full picture mid-journey. But when it comes to our information environment, the dangers run in the opposite direction. Yes, AI systems that mindlessly reflect our biases back to us present serious problems and will cause real harm. But perhaps the more profound question is why we’ve decided to consume the combined knowledge and wisdom of human civilization through a straw of “opinion” in the first place.
The promise of AI was never that it would have good opinions. It was that it would help us benefit from the wealth of expertise and insight in the world that might never otherwise find its way to us—that it would show us not what to think but how others have thought and how others might think, where consensus exists and where meaningful disagreement continues. As these systems grow more powerful, perhaps we should demand less personality and more perspective. The stakes are high: If we fail, we may turn a potentially groundbreaking interface to the collective knowledge and skills of all humanity into just more shit on a stick.

Friday, May 16, 2025

On replacing the American establishment - the ideological battle for the soul of Trump World.

I want to pass on the first few paragraphs from Chaffin and Elinson's piece in the May 10 Wall Street Journal, which give a juicy summary of warring camps in MAGA world:

When President Trump announced last month that he would upend decades of American trade policy by imposing massive tariffs even on longtime allies, he aroused the competing spirits of his closest advisers. Elon Musk, the world’s richest man, was all too aware of the disruption tariffs would pose to his electric vehicle company, Tesla, with factories and suppliers around the world. He blasted Trump’s trade adviser, Peter Navarro, as “a moron” and “dumber than a sack of bricks.”
Vice President JD Vance, on the other hand, is an ardent defender of a trade policy that Trump insists will restore industrial jobs to the Rust Belt, including Vance’s home state of Ohio. “What has the globalist economy gotten the United States of America?” he asked on Fox News last month.
“We borrow money from Chinese peasants to buy the things those Chinese peasants manufacture. That is not a recipe for economic prosperity.”
Within that clash were strains of two radical and conflicting philosophies that have animated Trump’s first 100 days. On one side are tech bros racing to create a new future; on the other, a resurgent band of conservative Catholics who yearn for an imagined past. Both groups agree that the status quo has failed America and must be torn down to make way for a new “postliberal” world. This conviction explains much of the revolutionary fervor of Trump’s second term, especially the aggressive bludgeoning of elite universities and the federal workforce.
But the two camps disagree sharply on why liberalism should be junked and what should replace it. The techies envision a libertarian world in which great men like Musk can build a utopian future unfettered by government bureaucrats and regulation. Their dark prince is Curtis Yarvin, a blogger-philosopher who has called for American democracy to be replaced by a king who would run the nation like a tech CEO.
The conservative Catholics, in contrast, want to return America to a bygone era. They venerate local communities, small producers and those who work with their hands. This “common good” conservatism, as they call it, is bound together by tradition and religious morality. Unlike Musk, with his many baby mamas and his zeal to colonize Mars, they believe in limits and personal restraint.

Wednesday, May 14, 2025

Our human consciousness is a 'Controlled Hallucination' and AI can never achieve it.

I want to suggest that readers have a look at an engaging popular article by Darren Orf that summarizes the ideas of Anil Seth. Seth is a neuroscientist at the University of Sussex whose writing was one of the sources I used in preparing my most recent lecture, New Perspectives on how our Minds Work. On the 'singularity,' the point at which the intelligence of artificial minds might surpass that of human minds, Seth makes the simple point that intelligence is not the same thing as consciousness, which depends on our biological bodies (something AI simply doesn't have) - bodies that use a bunch of controlled hallucinations to run our show.

Monday, May 12, 2025

How ketamine breaks through anhedonia - reigniting desire

When chronic depression has not been relieved by behavioral therapies such as meditation or cognitive therapy, ketamine is sometimes found to provide relief. Lucan et al. probe the brain changes in mice given a single exposure to ketamine that rescues them from chronic stress-induced anhedonia. Here is their summary of the paper:

Ketamine is recognized as a rapid and sustained antidepressant, particularly for major depression unresponsive to conventional treatments. Anhedonia is a common symptom of depression for which ketamine is highly efficacious, but the underlying circuits and synaptic changes are not well understood. Here, we show that the nucleus accumbens (NAc) is essential for ketamine’s effect in rescuing anhedonia in mice subjected to chronic stress. Specifically, a single exposure to ketamine rescues stress-induced decreased strength of excitatory synapses on NAc-D1 dopamine receptor-expressing medium spiny neurons (D1-MSNs). Using a cell-specific pharmacology method, we establish the necessity of this synaptic restoration for the sustained therapeutic effects of ketamine on anhedonia. Examining causal sufficiency, artificially increasing excitatory synaptic strength onto D1-MSNs recapitulates the behavioral amelioration induced by ketamine. Finally, we used opto- and chemogenetic approaches to determine the presynaptic origin of the relevant synapses, implicating monosynaptic inputs from the medial prefrontal cortex and ventral hippocampus.