I have enjoyed reading a recent essay by Aneesh Sathe, which I recommend you read in full. I will pass on here the equally impressive job that Claude Sonnet 4.6 did in summarizing its main points, responding to my editing and emphasis suggestions, and adding some further ideas on the relevant neurobiological substrate: "the last biological moat," what is left for the human mind to do. Here is the result, which begins with the title of Sathe's essay:
The Lightening of Intent: When Execution Gets Cheap, What Remains Scarce?
Aneesh Sathe's essay "The Lightening of Intent" is one of the more intellectually disciplined takes on AI's civilizational meaning I've encountered — grounded in energy economics, manuscript history, and complexity theory rather than hype. It builds explicitly on Venkatesh Rao's "World Machines" framework, published just this week, and the two pieces read best together.
The World Machines Backdrop
Rao's framework organizes civilizational history into overlapping "machines" — planetarities, each with a nominal lifespan of about a millennium, cycling through Dawn, Day, and Dusk phases. Currently, the Modernity Machine is entering its Dusk stage, the Divergence Machine has reached its Day stage, and the Liveness Machine has just been born into its Dawn.
The Liveness Machine is only being born now because real AI has emerged. The most leveraged use of energy, whether renewable or not, will be to power AI. And AI will animate a planet-scale Liveness Machine — whether it is a grimdark or solarpunk version is yet to be determined.
Sathe's essay fills in the economic and physical mechanisms underneath that historical arc.
The Core Argument
The cost of putting an idea into the world has fallen by roughly five orders of magnitude over the last millennium. The bottleneck has reversed: arranging atoms used to be the hard part; now, having ideas is. Soon, it will be intents.
The Codex Amiatinus — the oldest complete Latin Bible — is Sathe's anchor image. It weighed about seventy-five pounds, required close to one thousand calfskins, cost years of scribal labor from sixty monks, and the life of the abbot who carried it toward Rome in 716 CE. Today, a blog post costs nothing and reaches more readers in an afternoon.
The Numbers Worth Noting
Manuscript-to-print transition:
- Pre-print Europe held fewer than five million manuscripts; the sixteenth century produced two hundred million printed books, the eighteenth a billion.
- Gutenberg produced a hundred and eighty Bibles in the time a scriptorium managed one. Book prices fell 2.4 percent per year for over a century; each new printer in a city dropped prices by another quarter.
- The doubling time for European book production collapsed from roughly 104 years before 1450 to 43 years after.
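The figures above are compounding claims, and the compounding can be checked directly. A minimal sketch (the numbers are the essay's; only the arithmetic is mine):

```python
import math

# Book prices falling 2.4% per year, compounded over a century:
price_factor = (1 - 0.024) ** 100
print(f"Price after 100 years at -2.4%/yr: {price_factor:.1%} of the original")
# about 8.8% of the starting price, i.e. a roughly elevenfold real-terms decline

# Doubling times imply exponential growth rates (rate = ln 2 / T):
rate_before = math.log(2) / 104   # pre-1450 doubling time in years
rate_after = math.log(2) / 43     # post-1450 doubling time in years
print(f"Production growth roughly {rate_after / rate_before:.1f}x faster after 1450")
```

So a 2.4 percent annual decline, sustained for a century, erases over ninety percent of the real price, and halving the doubling time from 104 to 43 years means production grew about two and a half times faster.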
Energy rate density (Chaisson's framework): This quantity — free energy flow per unit mass in ergs per second per gram — rises monotonically with complexity: galaxies ≈ 0.5; stars ≈ 2; planets ≈ 75; plants ≈ 900; animals ≈ 20,000; the human brain ≈ 150,000; modern human society in aggregate ≈ 500,000 — the most energy-dense phenomenon known. AI will push this higher still.
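Chaisson's sequence can be laid out as a quick sanity check that the quantity really does rise monotonically along the complexity ladder, and by how much overall (values as quoted above):

```python
# Chaisson's energy rate density values, in erg/s/g, as cited in the essay.
energy_rate_density = {
    "galaxies": 0.5,
    "stars": 2,
    "planets": 75,
    "plants": 900,
    "animals": 20_000,
    "human brain": 150_000,
    "human society": 500_000,
}

values = list(energy_rate_density.values())
assert values == sorted(values)  # rises monotonically with complexity

span = values[-1] / values[0]
print(f"Society is ~{span:,.0f}x more energy-dense per gram than a galaxy")
# a span of one million between the two ends of the sequence
```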
Per-capita energy consumption: It has risen from about two thousand kilocalories per day in the Paleolithic — all of it food — to two hundred and thirty thousand in the modern United States.
Energy return on investment (EROI):
- Modern agriculture requires 13.3 calories of fossil-fuel input per calorie of food consumed.
- Fossil fuels at the useful-energy stage return only about 3.5 calories per calorie invested; road transport, 1.6 to 1. The estimated minimum EROI for a complex society is about 5 to 1.
- Solar PV costs have fallen from $106 per watt in 1976 to under $0.10 today — a 1,300-fold decline in under fifty years — with an estimated useful-stage energy return of 25 to 30:1, seven to nine times higher than fossil fuels.
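What these ratios mean in net-energy terms is easy to make concrete. A sketch using the essay's figures (the `net_fraction` helper is mine; at an EROI of r, a 1/r share of gross output must be reinvested in getting the energy, and the rest is surplus):

```python
# Net energy surplus implied by an EROI ratio: net fraction = 1 - 1/EROI.
def net_fraction(eroi: float) -> float:
    return 1 - 1 / eroi

for label, eroi in [("fossil fuels, useful stage", 3.5),
                    ("road transport", 1.6),
                    ("complex-society minimum", 5.0),
                    ("solar PV, useful stage", 27.5)]:  # midpoint of 25-30
    print(f"{label}: {net_fraction(eroi):.0%} of gross energy is surplus")

# The solar-to-fossil advantage the essay cites:
print(f"solar/fossil EROI ratio: {25 / 3.5:.1f} to {30 / 3.5:.1f}")
```

Road transport at 1.6:1 leaves under forty percent of gross energy as surplus, well below the roughly eighty percent that the 5:1 societal minimum implies; solar at 25-30:1 clears it comfortably, and 25/3.5 through 30/3.5 is exactly the seven-to-nine-fold advantage quoted above.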
Data accumulation: The internet is projected to hold something on the order of two hundred zettabytes by 2026, mostly text and image, mostly read by machines. Roughly ninety percent of all data ever created has been generated in the last two years.
Key Conceptual Moves
The substrate-spark distinction. Sathe draws an analogy to the prebiotic ocean: the pre-life ocean held amino acids and nucleotides for hundreds of millions of years before anything used the accumulation. The chemistry was not the difference; what mattered was that something started to act on it. Data without intent is a soup of records that accumulates and forgets. LLMs are the first time the substrate has been wired to a borrowed spark of human intent — which maps closely to what Rao calls the Liveness Machine's defining property: AI is oozy, like a primordial soup that harbors intensely reactive chemistry.
Atoms downstream. The HTTP standard, drafted as a specification in a matter of weeks in the early 1990s, has restructured several trillion dollars of physical economic activity over thirty years. The atoms moved themselves. The direction of causation between ideas and matter has inverted.
The auteur mode. A bench scientist in 2026 submits a query to a generative model and receives a thousand candidate molecules in twenty minutes; her job is no longer to generate, it is to pick. Taste, selection, and direction become the scarce inputs. Rao frames this as "execution pull" — AI drawing us out from vita contemplativa regimes into vita activa regimes.
Intents red in tooth and claw. As the substrate becomes more responsive, intent becomes the competitive variable. The first generation of intent-collisions is three to five years out; the shape of the era will be determined in that interval. Rao places this on a longer timescale: divergence will dominate in the short term (2–5 years) but liveness effects will compound more steadily and dominate in the long term (beyond 5 years).
The energy caveat. The whole argument rides on an energy transition. If the solar transition holds, the Liveness era inherits a re-powered version of the Modernity Machine's infrastructure, sustained on incoming sunlight rather than deposited carbon. If the transition does not hold, the substrate degrades faster than the intent-driven economy can mature, and the lightening of intent ends as a brief anomaly. Both outcomes are within reach.
Why This Matters
Sathe and Rao together make a tightly nested argument: civilization is a thermodynamic system that keeps burning hotter; each energy-surplus step builds infrastructure that amplifies individual intention; AI is the latest and sharpest such amplifier; and the emerging bottleneck is not execution but what you actually want. For those of us who have spent careers thinking about the neural substrates of agency and intention, the question has an obvious next layer: what, neurobiologically, is the capacity that remains scarce when everything else gets cheap? Sitting with confusion long enough for clarity to emerge — Sathe's phrase — sounds a lot like what the prefrontal cortex does when it holds competing representations in working memory and waits for resolution. That may be the last purely biological moat.
Sathe's companion essay, "The Viscous Frontier", takes up how to act in this regime — with attention as your constraint and no canonical direction pulling. Rao's full World Machines archive is at Contraptions.
The Last Biological Moat: Intention as Prediction Error Suppression
Sathe's claim that sitting with confusion long enough for clarity to emerge remains irreducibly human invites a neuroscientific gloss. In Friston's active inference framework, intentional action is not the initiation of a motor command but the suppression of prediction error about a desired future state. The brain generates a model of how the world should be — the goal — and then acts to make sensory input conform to that model, minimizing the divergence between predicted and actual states. What Sathe calls "formulating a direction" is, in these terms, the construction and stabilization of a prior over future states: the brain committing, against competing attractors, to one preferred trajectory through state space.

This is metabolically and computationally expensive precisely because it requires holding an unresolved representation in working memory — prefrontal cortex sustaining an active prior — while suppressing the pull of more immediately rewarding or more habitual alternatives. The "confusion" phase is not inefficiency; it is the system sampling the landscape before locking the prior.

AI systems, by contrast, have no intrinsic priors about what they want the world to be. They are extraordinarily powerful at executing on a prior once supplied, but the prior itself — the intent — must come from outside the model. This is why Sathe's bottleneck and Friston's framework converge on the same point: what remains scarce, and stubbornly biological, is the capacity to generate a stable, motivationally loaded model of a preferred future and hold it long enough to act. Everything downstream of that — the scribal labor, the printing press, the HTTP spec, the generative model — is infrastructure for carrying the prior into the world. The infrastructure keeps getting cheaper and more powerful. The prior still has to come from somewhere.
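The directional point — acting on the world to match the prior, rather than revising the prior to match the world — can be caricatured in a few lines. This is a toy illustration only; the variable names and the simple proportional update rule are invented for the sketch and are not Friston's formalism:

```python
# Toy sketch of intention as prediction-error suppression (illustrative only).

goal_prior = 10.0      # the stabilized prior: the preferred future state
world_state = 0.0      # the actual state of the world
action_gain = 0.3      # how strongly action corrects the error each step

for step in range(20):
    prediction_error = goal_prior - world_state
    # Active-inference direction: change the world to reduce the error,
    # rather than revising the prior to agree with the world.
    world_state += action_gain * prediction_error

print(f"final state: {world_state:.3f}")  # converges toward the prior, 10.0
```

The essential asymmetry is in the update line: the error is resolved by moving `world_state`, never `goal_prior`. Holding `goal_prior` fixed across all twenty steps is the cheap computational analogue of what the paragraph above describes as metabolically expensive prefrontal work.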