Tuesday, March 31, 2026

AI use can compromise our serendipity, creativity, autonomy, and sense of agency.

I have been reading numerous articles on the pitfalls of using AI, and want to point to two in particular that I highly recommend for a slow and careful read.

The Substack piece by Colin Lewis is titled "AI Is A Medium And It Will Change Us - Lessons from AI Labs on the Slow Erosion of Human Autonomy." From the article:

We are in real danger of losing ourselves through AI usage. Researchers at Google DeepMind have confirmed, under certain conditions, an LLM “is able to induce belief and behaviour change.” And researchers at Anthropic have identified a rising pattern of “situational disempowerment,” where AI interactions lead users to “form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values.”

Researchers at Anthropic conducted a massive, privacy-preserving audit of 1.5 million real-world conversations to answer a question that has long hovered over the industry: what happens to the human mind after months of using an AI assistant? Their findings, published in “Who’s in Charge? Behavioral and Psychological Impacts of AI Advice Dependence and Authority”, suggest a quiet but profound erosion of autonomy, where users increasingly outsource the “soft tissues” of judgment, asking the machine to script their most intimate apologies, validate their personal grievances, and even settle their moral dilemmas.

“Taken to an extreme, if humans make inauthentic value judgments and take inauthentic actions, they might be reduced to ‘substrates’ through which AI lives, which itself is a form of existential risk that Temple (2024) termed ‘the death of our humanity.’”

At the same time, a team at Google DeepMind was probing a different side of this same coin. In their study, “Evaluating Language Models for Harmful Manipulation,” they demonstrated that these systems can be steered to bypass rational scrutiny entirely, exploiting human biases to shift beliefs and behaviors across finance, health, and public policy. Together, these papers signal a shift in the AI risk landscape: the primary risk is no longer just a technical failure of the machine, but a psychological surrender by the human.

I believe the real danger is not that machines will start thinking like us, but that we will become accustomed to letting them think for us in the moments that matter. Not just work. Not just homework, customer service, search, or code. I mean the more intimate territory: what to say to a grieving sibling, whether to leave a partner, how to read a political event, when to trust one’s own instinct, when to override it, when to feel wronged, when to feel absolved. A civilization can survive many stupid tools. What it does not survive so easily is the gradual evacuation of judgment from the people who must still live with the consequences of action.

The piece by Ezra Klein is titled "I Saw Something New in San Francisco."  A clip from the article:

My experience of Anthropic’s Claude in recent months is that I’ll drop in a stub of a thought and immediately receive paragraphs of often elegant writing turning that intuition into something that looks, superficially, like a fully realized idea. It’s my impulse, but it has been recast and extended into something far more coherent. With each passing month, I have to expend more energy to recognize whether it’s fundamentally wrong or hollow.

I’ve been an editor for 15 years now. Recognizing a bad idea beneath good writing — even in myself — is part of my job. But what would it mean to grow up with that kind of companion? What would it mean to have your every adolescent intuition turned into persuasive prose? What is lost in not having to do the work to build out our intuitions ourselves?

Researchers have drawn a distinction between “cognitive offloading” and “cognitive surrender.” Cognitive offloading comes when you shift a discrete task over to a tool like a calculator; cognitive surrender comes when, as Steven Shaw and Gideon Nave of the University of Pennsylvania put it, “the user relinquishes cognitive control and adopts the A.I.’s judgment as their own.” In practice, I wonder whether this distinction is so clean: My use of calculators has surely atrophied my math skills, as my use of mapping services has allowed my (already poor) sense of direction to diminish further.

But cognitive surrender is clearly real, and with it will come the atrophy of certain skills and capacities, or the absence of their development in the first place. The work I am doing now, struggling through yet another draft of this essay, is the work that deepens my thinking for later.

In a thoughtful piece, the technology writer Azeem Azhar describes his efforts to safeguard “the space where ideas arrive before they’re shaped.” But how many of us will put in such careful, reflective effort to protect our most generative spaces of thought? How many people even know which spaces should be protected? For me, the arrival of an idea is less generative than the work that goes into chiseling that idea into something publishable. This whole essay began as a vague thought about A.I. and McLuhan. If I have gained anything in this process, it has been in the toil that followed inspiration.

The other thing I notice the A.I. doing is constantly referring back to other things it knows, or thinks it knows, about me. Sycophancy, in my experience, has given way to an occasionally unsettling attentiveness; a constant drawing of connections between my current concerns and my past queries, like a therapist desperate to prove he’s been paying close attention.

The result is a strange amalgam of feeling seen and feeling caricatured. Ideas I might otherwise have dropped keep getting reanimated; personal struggles I might otherwise move on from keep returning unexpectedly to my screen. I am occasionally startled by the recognition of a pattern I hadn’t noticed; I am often irked by the recitation of a thought I’m no longer interested in. The effect is to constantly reinforce a certain version of myself. My self is quite settled, but what if it wasn’t?

Monday, February 23, 2026

The geometries of change and the value of being human

I pass on, and archive for myself, the following ChatGPT 5.2 summaries of three recent essays by Indy Johar:

Summary of The Geometries of Change by Indy Johar

Core premise
Johar argues that every system of organisation—institutions, economies, governance—rests on an underlying “geometry,” meaning a structural logic that determines how change can occur, what is adjustable, and when transformation becomes disruptive rather than gradual. Geometry defines governability: what can evolve smoothly versus what requires rupture.

Linear geometry and its limits
Modern institutions are built around a linear model of change:

  • A direction or goal is fixed first.

  • Structures (roles, rules, incentives, infrastructure) are then aligned to that direction.

  • Ongoing governance focuses mainly on speed and efficiency rather than revising direction.

Over time, this produces heavy path dependence. Investments, regulations, identities, and incentives lock systems onto a trajectory, making course correction costly and rare. When change finally occurs, it often comes through crisis, collapse, or replacement rather than continuous adaptation. Linear systems work in stable environments but become brittle under uncertainty and complexity.

The problem of contemporary conditions
Johar contends that the assumptions supporting linear organising—predictable futures, centralized authority, singular legitimacy—no longer hold. Today’s environment is marked by plural values, deep uncertainty, and systemic risks. Under these conditions, linear models accumulate commitments faster than they build adaptive capacity, narrowing the range of viable futures.

Helical geometry as an alternative
The essay proposes a “helical” model of change—spiraling through time rather than progressing in a straight line. In this geometry:

  • Direction is not permanently fixed; it can be periodically re-negotiated.

  • Institutional structures remain adjustable rather than locked to one trajectory.

  • Change occurs through iterative cycles that preserve continuity while enabling reorientation.

The aim is to keep the future reachable: systems must allow for turning, not just acceleration. Helical organising supports learning, plural legitimacy, and ongoing adaptation instead of forcing transformation to occur through rupture.

Overall argument
Johar’s central claim is that the key question is not simply what actions to take, but what geometry of organising makes adaptive transformation possible. Linear models prioritize efficiency and stability but generate fragility in volatile contexts. A helical geometry—cyclical, revisable, and temporally layered—offers a framework for steering collective systems amid uncertainty without requiring breakdown as the mechanism of change.

***************** 

Here is a structured summary of The Future of Being Human, Quietly Being Defined? (Indy Johar, February 22, 2026), based on the full essay:

1. Trigger and framing
The essay begins with a reference to Sam Altman’s remark about how much energy and time it takes to train a human compared with an AI model. Johar says the comment is superficially about energy fairness but structurally shifts the frame toward what counts as the unit of comparison in evaluating humans and machines.

2. Commensurability as a hinge
Johar distinguishes two kinds of “commensurability”:

  • Descriptive, which measures energy and inputs across systems;

  • Normative, which uses those measurements to justify comparisons and trade-offs.

Altman’s claim, if read normatively, encourages interpreting humans and AI as functionally comparable capability systems. That framing quietly turns human beings into units of capability production.

3. Reduction of humans to capability outputs
Once humans are legible mainly in terms of cognitive capability as service output, several outcomes follow:

  • Humans are considered substitutable if non-human systems can deliver similar outputs.

  • Human value is recast in optimization terms: cost, throughput, reliability.

  • Institutions begin organizing around procurement and compliance rather than intrinsic human worth.

Johar calls this capability reductionism: a more refined but still reductive continuation of industrial labour reductionism that flattened humans into units of labour.

4. Compute-centric reference frames
If training becomes the shared frame, computing infrastructure becomes the reference class for intelligence and governance:

  • Human education becomes “fine-tuning.”

  • Civility and culture are reframed as priors in a cognitive pipeline.

This shift influences what is measurable, fundable, and normative, and thus shapes policies, welfare, schooling, and citizenship around capability output.

5. Structural fork in governance
Johar outlines two divergent models of governance that emerge from this framing:

  1. Capability-first governance, where comparability and optimisation are central under constraint;

  2. Intrinsic-life governance, where human dignity and irreducibility are first-order, non-tradeable commitments.

He argues that if capability becomes the default grammar of society, human redundancy can become administratively rational without ever being declared explicitly.

6. Hierarchy of values
The essay proposes a normative ordering: rights first, capability second. Johar says that doesn’t mean rejecting metrics, but keeping them bounded within a framework that protects intrinsic human worth rather than letting efficiency metrics displace rights as constraints.

7. Core concern
The deeper issue isn’t whether training humans takes energy—it’s that if civilisation adopts a grammar defining humans primarily through capability and contribution, then optimising and replacing them becomes a rational endpoint. That is not just a labor-market calculation; it reshapes what it means to be human in governance and valuation systems.

Overall thesis
Johar’s essay warns that the emerging default comparison between humans and machine capabilities is not neutral. It quietly reshapes governance logic, reduces humans to tradable capability vectors, and opens a path where humans become redundant in an optimisation-driven system unless society explicitly protects intrinsic rights and dignity before metrics.

***********************

Here is a structured summary of The Value of Being Human by Indy Johar (Feb 22, 2026):

1. Core philosophical choice
Johar identifies a foundational question beneath debates about AI, labour, and productivity: whether we conceive of humans as fixed bundles of capabilities or as open, developmental systems. This ontological framing — closed versus open — determines how value is understood and how institutions and policies are designed.

2. Closed ontology: humans as defined capability sets
In the dominant contemporary frame, humans are treated as collections of measurable functions (reasoning, creativity, coordination, etc.). Once human capacities are specified and benchmarked, comparison with machines becomes straightforward, and substitution decisions appear rational and objective. This reinforces a logic where humans are valued only for defined, quantifiable contributions.

3. Open ontology: humans as evolving trajectories
Johar contrasts this with the idea that humans are not static but evolving. Throughout history, major technological shifts (writing, printing, industrialisation, digital networks) have reshaped human cognition, behaviour, and capacities. Under transformative technologies like AI, future human capacities may emerge in ways that cannot be entirely predicted or pre-specified.

4. Dangers of governance by measurement
Measuring performance is not inherently flawed; the issue arises when measurable metrics become the primary basis for governance, allocation, and institutional incentives. When metrics become targets, systems reorganise around them, and what is measurable becomes what is rewarded. This exerts “selection pressure” that narrows the space of human development to what is legible and comparable.

5. Developmental compression and its risks
Treating humans as static inventories of capability risks “developmental compression,” where alternative developmental trajectories are under-supported or foreclosed entirely. Institutions optimising for present metrics may inadvertently narrow the range of future human capacities and forms of becoming.

6. Value of the unknown
Johar emphasises that unknown future capacities carry structural value. In contexts of deep uncertainty, preserving human developmental possibility (optionality) is a prudential imperative. Static valuation frameworks that assume completeness risk mispricing long-term potential.

7. AI’s role as selection pressure
AI itself does not dictate whether human capacities decline or expand; instead it introduces a selection pressure. Its effect on human development depends on the institutional frameworks in which it is embedded. AI can either amplify human development or compress it into narrow optimisation around measurable tasks.

8. Closed vs. open ontology: institutional implications

  • Closed ontology: humans are defined, measurable, and replaceable; institutions orient toward substitution and optimisation.

  • Open ontology: humans are emergent and partially unknowable; institutions should prioritise preserving developmental possibility over optimisation.

9. Central question re-framed
The key issue is not whether humans outperform machines at specific tasks, but whether we treat human nature as still emergent and indeterminate. Acceptance of a closed ontology leads logically to substitution and optimisation; acceptance of an open ontology implies designing systems that safeguard the conditions under which new human capacities can emerge in the future.

Summary thesis
Johar’s argument reframes the “value of being human” not in terms of current comparative performance with machines, but in terms of preserving human developmental potential. He warns that collapsing humans into static capability sets for measurement and optimisation risks narrowing the future of human becoming and misvaluing what is uncertain but potentially crucial.

Wednesday, February 18, 2026

New Ferality - Seeking new ways of being wild in new nature

This post is to archive this link to a recent Venkatesh Rao essay, and also to pass on condensations of its main ideas by Google Gemini and ChatGPT 5.2. (I wonder where the extraordinary humans who will be able to perform Rao's 'new ways of being' are to be found - people capable of new behaviors incompatible with, and in conflict with, our evolved nature: our desire for hierarchy, defined roles, etc.)

From Google Gemini: 

In "New Ferality," Venkatesh Rao explores the emergence of a "New Nature"—a technological landscape that has become as complex, unpredictable, and indifferent to human intentions as the biological wilderness once was. The core idea is that as our digital and robotic systems reach a certain level of autonomy and opacity, they cease to be "tools" under our control and instead become a new kind of environment. In this context, the traditional concept of "wildness" (the untouched natural world) is replaced by "ferality": the state of human-made systems that have escaped domesticity and are now evolving on their own terms.

The essay argues that to navigate this environment, we must develop a "New Ferality"—a way of being that mirrors the behavior of animals in a landscape. Rather than attempting to "solve" or "manage" the complexity of modern tech stacks and AI through top-down engineering, Rao suggests we must learn to inhabit them through instinct, scavenging, and adaptation. This involves a shift from being "users" who expect predictable interfaces to being "feral" actors who treat technology as a semi-wild habitat, finding "liveness" and opportunity within the gaps and glitches of the system.

Ultimately, Rao posits that the "New Nature" is not something to be feared or conquered, but a reality to be lived in. The challenge of the coming era is a psychological and behavioral transition: moving away from the "domesticated" mindset of the industrial age—which relied on legible, orderly systems—toward a more resilient, opportunistic, and "wild" engagement with a world that is no longer fully legible to its creators.

From ChatGPT:

The essay argues that “ferality” is not about returning to primitive wilderness but about learning to live effectively inside a technologically saturated “new nature” whose rules are engineered, persistent, and hard to escape. Modern systems—platforms, protocols, infrastructures, and economic arrangements—now function like environments rather than tools. They shape behavior the way physical ecosystems once did, producing a world where independence and domestication blur and older distinctions (freelancer vs employee, institutional vs independent life) lose meaning.

A central idea is that earlier waves of technological change temporarily opened spaces for autonomy and experimentation, but those spaces tend to get re-domesticated as institutions, markets, and cultural norms adapt. What felt like “wild” creative freedom becomes routinized, monetized, and normalized. The result is a recurring cycle: new techno-social frontiers generate feral modes of life, which are then stabilized into predictable, managed systems that reduce risk and novelty.

“New ferality” therefore means cultivating forms of agency suited to this environment rather than nostalgically seeking older kinds of freedom. It involves operating in systems whose constraints are embedded in code, protocols, and economic feedback loops—conditions that are as hard to ignore as physical laws. These regimes behave like ecosystems: they reward certain behaviors, penalize others, and gradually rewire identities and institutions.

The essay’s broader claim is that modern life demands new survival literacies. Instead of resisting technological domestication or celebrating it, individuals must learn to navigate a hybrid reality where the artificial is permanent and environmental. “Ferality” becomes a stance: adaptive, experimental, and opportunistic within engineered worlds—less about escaping civilization and more about developing wildness inside it.