Showing posts with label culture/future. Show all posts

Monday, January 05, 2026

Venkatesh Rao's 'World Machines'

As usual, I found Venkatesh Rao's writing in his latest piece "The Divergence Machine" a bit opaque and wordy, so I asked ChatGPT 5.2 for a condensation and summary, which I archive and pass on to readers here:

Here’s a condensed and clear summary of The Divergence Machine by Venkatesh Rao (Contraptions, Jan 5 2026):

Core Thesis
Rao introduces the concept of the divergence machine as a successor world-machine to the modernity machine explored in his 2025 book club series. The divergence machine’s logic has been built from ~1600–2000 and has fully operated only for about 25 years. It doesn’t reverse modernity but supersedes its organizing principles.

World Machines Framework

  • World machines are conceptual contraptions that embody how the world orders itself in particular epochs.

  • In any period, one machine operates, one is being built, and another may be declining or dying.

  • The modernity machine (constructed 1200–1600, dominant 1600–2000) focused on legibility and convergence—making people, systems, and ideas enumerable, interoperable, and canonical.

  • The divergence machine rests on a different logic: variety, expansion of differences, and decentralized relationality rather than centralized canonicity.

How the Divergence Machine Works

  • It does not aim to dismantle modernity directly nor is it just “postmodernity.”

  • It takes for granted many effects of the modernity machine (e.g., global connectivity) but operates in spaces where modernity’s centralized canon no longer determines outcomes.

  • Its defining move is to make canonicity irrelevant, focusing instead on pervasive diversity and plural modes of being that coexist without unified narratives.

Phenomenology and Method
Rao suggests readers filter phenomena through several lenses to see if they belong to the divergence machine rather than to other world processes like:

  • Late modernity (zombie persistence of modern norms),

  • Postmodernity (reaction/critique of modernity),

  • Metamodernity (piecemeal modern revival).

Guiding Questions for Divergence
Rao proposes probing events/processes for traits such as:

  • Plurality over convergence,

  • Relativized canon,

  • Generative variety,

  • Coexistence without closure,

  • Emergent “liveness” not reducible to modern or postmodern categories.

Practical Context
This piece sets up the 2026 Contraptions Book Club theme, with readings (e.g., Candide, The Underground Empire, philosophical tensions among Enlightenment figures) intended to explore the formative periods and idea-spaces relevant to the divergence machine’s logic.

Big Picture Takeaway
Rao’s divergence machine is a conceptual model for understanding the contemporary world not as a breakdown of modernity but as a phase transition where plurality and decentralized, non-canonical relationality become the driving logic of civilization, distinct from both modern and postmodern frameworks.



Friday, December 26, 2025

What are humans for? - AI Forces a Question We’ve Been Dodging

AI Is Already Fusing With Us — The Only Question Is How

The argument about whether AI will enhance or diminish our humanity usually starts in the wrong place. People argue about what AI should do before asking a more basic and uncomfortable question: what are humans for?

Every powerful technology embeds an answer to that question. AI is no exception. In fact, it may be the most consequential case yet, because AI is not just a tool we use—it is a system that increasingly co-thinks with us, shapes our attention, edits our language, nudges our choices, and quietly rewires our sense of agency.

We are not facing a future of humans versus machines. We are already living in a world of human–machine fusion.

The real issue is not whether this fusion will happen. It is what kind of fusion it will be.

There Is No Such Thing as “Human Values”

Much of the public discussion of AI ethics rests on a fantasy: that “human values” exist as a coherent, global, agreed-upon set of principles that AI can be aligned with.

They don’t.

Value, purpose, and meaning are social constructions. They vary radically across cultures, religions, and political systems—and they always have. What one society calls a meaningful life, another calls wasted. What one treats as sacred, another treats as irrelevant or dangerous.

There is no global agreement on what humans are for. Expecting a worldwide technology like AI to be guided by a single, shared vision of human purpose is naïve at best, dishonest at worst.

There Will Be Many AIs, Just as There Are Many Religions

Because there is no single answer to what humans are for, there will be no single AI.

There will be many forms of AI, each reflecting the values—explicit or implicit—of the cultures, institutions, and power structures that create them. Some will be optimized for surveillance, compliance, and efficiency. Others will be built to extend memory, imagination, and self-understanding.

In blunt terms: some AIs will enslave us; others could help liberate us.

The enslaving versions will treat humans as components to be managed—predictable, optimizable, correctable. Friction will be treated as inefficiency. Deviance as error. Interior life as noise.

The liberating versions will function as prostheses for the mind—tools that expand rather than replace human capacities. They will support reflection rather than manipulation, agency rather than control, curiosity rather than conformity.

The difference is not technical. It is philosophical.

AI as Prosthesis vs. AI as Manager

We already accept prostheses for the body—glasses, hearing aids, pacemakers—because they restore or extend human function without claiming authority over the person. AI can play a similar role for cognition: augmenting memory, pattern recognition, synthesis, and imagination.

But there is a sharp boundary here.

A cognitive prosthesis supports a human agent.
A cognitive manager displaces one.

The danger is not that AI becomes intelligent. The danger is that it becomes normative—quietly deciding what matters, what counts, what is worth attention, and what is not.

Once that happens, humans do not disappear. They persist as interfaces.

The Question We Can’t Outsource

AI forces us to confront a question we have long evaded: what kind of humans do we want to be when much of our thinking is no longer done alone?

There will be no final answer. There never has been. Meaning has always been plural, unstable, and contested.

But refusing to ask the question does not keep us neutral. It simply allows markets, states, and optimization systems to answer it for us—implicitly, automatically, and without appeal.

AI will fuse with us. That is already happening.

The only choice left is whether we fuse as authors of our lives—or as managed systems optimized for goals we did not choose.

***********************************

The above text is ChatGPT 5.2's response to a prompt asking it to develop the ideas in a long paragraph I wrote into a potential MindBlog post, followed by a second prompt asking it to turn its first excellent response into a shorter and more polemical version.

 

Wednesday, December 24, 2025

The Modernity Machine

I want to pass on to readers this Venkatesh Rao substack essay that summarizes what his book club has learned in the past year: 

The Modernity Machine III

Completion, Saturation, and Phase Transition

Friday, October 31, 2025

Is AI Pushing Us Back to a Pre-Gutenberg Culture? - A Rethink of Creativity in the AI Age

Here I pass on a link to both the video and a transcript of a striking Alex Dobrenko interview with Venkatesh Rao that I have just read, and here is ChatGPT's summary of the main points that Rao makes. I suggest that you watch or read the entire interview.

Key Ideas
1. Individual Authorship as a Recent Invention
  • Rao argues that the notion of the individual creative author (e.g., "I wrote a novel") is historically quite recent and culturally specific.
  • In the age of large language models, this individual-authorship model is being challenged: instead of one "genius," creativity is increasingly collective, networked, and mediated by big models.
  • The implication: critics who cling to the "author as lone genius" narrative may be missing structural shifts in how creativity happens when AI is involved.

2. AI as a Channel for Cultural Inheritance, Not Just a Plagiarism Machine
  • Rao suggests that we should view LLMs (large language models) as channels or amplifiers of shared cultural material, rather than simply plagiaristic tools.
  • The idea is: humans and machines together remix, iterate, and transform cultural inheritance. The key question shifts from "Who owns the output?" to "How is the inheritance being transformed, and what risks and trade-offs are involved?"
  • This reframing undercuts certain AI-critique tropes (e.g., "AI steals from human authors") by shifting focus to how culture itself is processed.

3. "Creative Work" Isn't Merely Labor: the Myth of Effort = Value
  • Rao pushes back on the assumption that creative work is like labor in a factory: "I spend more hours, therefore my output has more value." Instead he argues that risk (taking chances, doing something unusual) is more central to originality and creative value than sheer effort.
  • For him, much of the "AI slop" (mediocre output) is simply low-risk human behavior: safe, predictable, derivative. Real value comes when someone (human + tool) takes a risk, changes a pattern, introduces novelty.

4. Why "AI Alignment" Became PR-Speak
  • Rao critiques the dominant narrative of "AI alignment" as being overly focused on controlling the technology rather than rethinking what we mean by intelligence, creativity, personhood, and risk in a post-AI context.
  • He suggests the alignment framing becomes a kind of packaging for fear and PR rather than a deep reframing of the underlying issues of agency and cultural change.

5. Writing with AI: Ideation, Play, Lego-Style Construction
  • Rao gives insight into his own practice: he uses AI as a "Lego" set for ideation, playing with fragments, assembling possible combinations, and experimenting, rather than treating the model as a ghostwriter.
  • This reflects his broader point that human-AI collaboration is less about outsourcing and more about amplifying risk and novelty.

6. Disclosure of AI Use Will Soon Disappear as a Meaningful Signal
  • Rao predicts that explicitly disclosing "I used AI" in your work will become less meaningful or even counterproductive, because so much of the creative and cultural infrastructure will be AI-mediated anyway.
  • The more interesting question becomes not whether AI was used, but how it was used: what choices humans made in the process, what risks were taken, what novelty resulted.

7. Why AI Is Pushing Us Back to a Pre-Gutenberg Culture
  • Rao frames AI's arrival as pushing us into a different cultural regime, comparable in some ways to the shift after the printing press (the "Gutenberg" moment).
  • In that sense, critics who treat AI as simply another tool may be missing the deeper structural changes: changes in authority, authorship, culture, and the transmission of knowledge.

  

Wednesday, August 27, 2025

AI is a Mass-Delusion Event - and - Gen Z and the End of Predictable Progress

I want to recommend two articles whose titles form this post's title, the first by Charlie Warzel in The Atlantic and the second by Kyla Scanlon in her Substack newsletter. Here is the final portion of Warzel's essay:

What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?

The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures. 

Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane.

 

Monday, August 04, 2025

The coming societal collapse

I want to pass on this review in The Guardian by Damian Carrington, pointed out to me by a friend, titled ‘Self-termination is most likely: the history and future of societal collapse.' It describes the just-released "Goliath's Curse" by Luke Kemp, published in the UK on 31 July by Viking Penguin.

An epic analysis of 5,000 years of civilisation argues that a global collapse is coming unless inequality is vanquished

“We can’t put a date on Doomsday, but by looking at the 5,000 years of [civilisation], we can understand the trajectories we face today – and self-termination is most likely,” says Dr Luke Kemp at the Centre for the Study of Existential Risk at the University of Cambridge.

“I’m pessimistic about the future,” he says. “But I’m optimistic about people.” Kemp’s new book covers the rise and collapse of more than 400 societies over 5,000 years and took seven years to write. The lessons he has drawn are often striking: people are fundamentally egalitarian but are led to collapses by enriched, status-obsessed elites, while past collapses often improved the lives of ordinary citizens.

Today’s global civilisation, however, is deeply interconnected and unequal and could lead to the worst societal collapse yet, he says. The threat is from leaders who are “walking versions of the dark triad” – narcissism, psychopathy and Machiavellianism – in a world menaced by the climate crisis, nuclear weapons, artificial intelligence and killer robots.

The work is scholarly, but the straight-talking Australian can also be direct, such as when setting out how a global collapse could be avoided. “Don’t be a dick” is one of the solutions proposed, along with a move towards genuinely democratic societies and an end to inequality.

His first step was to ditch the word civilisation, a term he argues is really propaganda by rulers. “When you look at the near east, China, Mesoamerica or the Andes, where the first kingdoms and empires arose, you don’t see civilised conduct, you see war, patriarchy and human sacrifice,” he says. This was a form of evolutionary backsliding from the egalitarian and mobile hunter-gatherer societies which shared tools and culture widely and survived for hundreds of thousands of years. “Instead, we started to resemble the hierarchies of chimpanzees and the harems of gorillas.”

Instead Kemp uses the term Goliaths to describe kingdoms and empires, meaning a society built on domination, such as the Roman empire: state over citizen, rich over poor, master over slave and men over women. He says that, like the biblical warrior slain by David’s slingshot, Goliaths began in the bronze age, were steeped in violence and often surprisingly fragile.

Goliath states do not simply emerge as dominant cliques that loot surplus food and resources, he argues, but need three specific types of “Goliath fuel”. The first is a particular type of surplus food: grain. That can be “seen, stolen and stored”, Kemp says, unlike perishable foods.

In Cahokia, for example, a society in North America that peaked around the 11th century, the advent of maize and bean farming led to a society dominated by an elite of priests and human sacrifice, he says.

The second Goliath fuel is weaponry monopolised by one group. Bronze swords and axes were far superior to stone and wooden axes, and the first Goliaths in Mesopotamia followed their development, he says. Kemp calls the final Goliath fuel “caged land”, meaning places where oceans, rivers, deserts and mountains meant people could not simply migrate away from rising tyrants. Early Egyptians, trapped between the Red Sea and the Nile, fell prey to the pharaohs, for example.

“History is best told as a story of organised crime,” Kemp says. “It is one group creating a monopoly on resources through the use of violence over a certain territory and population.”

All Goliaths, however, contain the seeds of their own demise, he says: “They are cursed and this is because of inequality.” Inequality does not arise because all people are greedy. They are not, he says. The Khoisan peoples in southern Africa, for example, shared and preserved common lands for thousands of years despite the temptation to grab more.

Instead, it is the few people high in the dark triad who fall into races for resources, arms and status, he says. “Then as elites extract more wealth from the people and the land, they make societies more fragile, leading to infighting, corruption, immiseration of the masses, less healthy people, overexpansion, environmental degradation and poor decision making by a small oligarchy. The hollowed-out shell of a society is eventually cracked asunder by shocks such as disease, war or climate change.”

History shows that increasing wealth inequality consistently precedes collapse, says Kemp, from the Classical Lowland Maya to the Han dynasty in China and the Western Roman empire. He also points out that for the citizens of early rapacious regimes, collapse often improved their lives because they were freed from domination and taxation and returned to farming. “After the fall of Rome, people actually got taller and healthier,” he says.

Collapses in the past were at a regional level and often beneficial for most people, but collapse today would be global and disastrous for all. “Today, we don’t have regional empires so much as we have one single, interconnected global Goliath. All our societies act within one single global economic system – capitalism,” Kemp says.

He cites three reasons why the collapse of the global Goliath would be far worse than previous events. First is that collapses are accompanied by surges in violence as elites try to reassert their dominance. “In the past, those battles were waged with swords or muskets. Today we have nuclear weapons,” he says.

Second, people in the past were not heavily reliant on empires or states for services and, unlike today, could easily go back to farming or hunting and gathering. “Today, most of us are specialised, and we’re dependent upon global infrastructure. If that falls away, we too will fall,” he says.

“Last but not least is that, unfortunately, all the threats we face today are far worse than in the past,” he says. Past climatic changes that precipitated collapses, for example, usually involved a temperature change of 1C at a regional level. Today, we face 3C globally. There are also about 10,000 nuclear weapons, technologies such as artificial intelligence and killer robots and engineered pandemics, all sources of catastrophic global risk.

Kemp says his argument that Goliaths require rulers who are strong in the triad of dark traits is borne out today. “The three most powerful men in the world are a walking version of the dark triad: Trump is a textbook narcissist, Putin is a cold psychopath, and Xi Jinping came to rule [China] by being a master Machiavellian manipulator.”

“Our corporations and, increasingly, our algorithms, also resemble these kinds of people,” he says. “They’re basically amplifying the worst of us.”

Kemp points to these “agents of doom” as the source of the current trajectory towards societal collapse. “These are the large, psychopathic corporations and groups which produce global catastrophic risk,” he says. “Nuclear weapons, climate change, AI, are only produced by a very small number of secretive, highly wealthy, powerful groups, like the military-industrial complex, big tech and the fossil fuel industry.

“The key thing is this is not about all of humanity creating these threats. It is not about human nature. It is about small groups who bring out the worst in us, competing for profit and power and covering all [the risks] up.”

The global Goliath is the endgame for humanity, Kemp says, like the final moves in a chess match that determine the result. He sees two outcomes: self-destruction or a fundamental transformation of society.

He believes the first outcome is the most likely, but says escaping global collapse could be achieved. “First and foremost, you need to create genuine democratic societies to level all the forms of power that lead to Goliaths,” he says. That means running societies through citizen assemblies and juries, aided by digital technologies to enable direct democracy at large scales. History shows that more democratic societies tend to be more resilient, he says.

“If you’d had a citizens’ jury sitting over the [fossil fuel companies] when they discovered how much damage and death their products would cause, do you think they would have said: ‘Yes, go ahead, bury the information and run disinformation campaigns’? Of course not,” Kemp says.

Escaping collapse also requires taxing wealth, he says, otherwise the rich find ways to rig the democratic system. “I’d cap wealth at $10 million. That’s far more than anyone needs. A famous oil tycoon once said money is just a way for the rich to keep score. Why should we allow these people to keep score at the risk of destroying the entire planet?”

If citizens’ juries and wealth caps seem wildly optimistic, Kemp says we have been long brainwashed by rulers justifying their dominance, from the self-declared god-pharaohs of Egypt and priests claiming to control the weather to autocrats claiming to defend people from foreign threats and tech titans selling us their techno-utopias. “It’s always been easier to imagine the end of the world than the end of Goliaths. That’s because these are stories that have been hammered into us over the space of 5,000 years,” he says.

“Today, people find it easier to imagine that we can build intelligence on silicon than we can do democracy at scale, or that we can escape arms races. It’s complete bullshit. Of course we can do democracy at scale. We’re a naturally social, altruistic, democratic species and we all have an anti-dominance intuition. This is what we’re built for.”

Kemp rejects the suggestion that he is simply presenting a politically leftwing take on history. “There is nothing inherently left wing about democracy,” he says. “Nor does the left have a monopoly on fighting corruption, holding power accountable and making sure companies pay for the social and environmental damages they cause. That’s just making our economy more honest.”

He also has a message for individuals: “Collapse isn’t just caused by structures, but also people. If you want to save the world then the first step is to stop destroying it. In other words: don’t be a dick. Don’t work for big tech, arms manufacturers or the fossil fuel industry. Don’t accept relationships based on domination and share power whenever you can.”

Despite the possibility of avoiding collapse, Kemp remains pessimistic about our prospects. “I think it’s unlikely,” he says. “We’re dealing with a 5,000-year process that is going to be incredibly difficult to reverse, as we have increasing levels of inequality and of elite capture of our politics.

“But even if you don’t have hope, it doesn’t really matter. This is about defiance. It’s about doing the right thing, fighting for democracy and for people to not be exploited. And even if we fail, at the very least, we didn’t contribute to the problem.”


Monday, June 23, 2025

MIT study - Our brains can accumulate cognitive debt by using AI for writing tasks

I pass on the abstract of a multiauthor work from MIT. Undergrads, EEG caps on, wrote three 20-minute essays. Those who leaned on GPT-4o showed weaker alpha-beta coupling, produced eerily similar prose, and later failed to quote their own sentences. The next MindBlog post relays a commentary on and critique of this work. 

With today's wide adoption of LLM products like ChatGPT from OpenAI, humans and businesses engage and use LLMs on a daily basis. Like any other tool, it carries its own set of advantages and limitations. This study focuses on finding out the cognitive cost of using an LLM in the educational context of writing an essay.

We assigned participants to three groups: LLM group, Search Engine group, Brain-only group, where each participant used a designated tool (or no tool in the latter) to write an essay. We conducted 3 sessions with the same group assignment for each participant. In the 4th session we asked LLM group participants to use no tools (we refer to them as LLM-to-Brain), and the Brain-only group participants were asked to use LLM (Brain-to-LLM). We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.

We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load, and to gain a deeper understanding of neural activations during the essay writing task. We performed NLP analysis, and we interviewed each participant after each session. We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent).

We discovered a consistent homogeneity across the Named Entities Recognition (NERs), n-grams, ontology of topics within each group. EEG analysis presented robust evidence that LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling. In session 4, LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re‑engagement of widespread occipito-parietal and prefrontal nodes, likely supporting the visual processing, similar to the one frequently perceived in the Search Engine group. The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.

We hope this study serves as a preliminary guide to understanding the cognitive and practical impacts of AI on learning environments.

Thursday, December 26, 2024

Oliver Sacks - The Machine Stops

 

A slightly edited MindBlog post from 2019 worth another read:

I want to point to a wonderful short essay written by Oliver Sacks before his death from cancer, in which he notes the parallels between the modern world he saw around him and E. M. Forster's prescient classic 1909 short story "The Machine Stops," in which Forster imagined a future in which humans lived in separate cells, communicating only by audio and visual devices (much as, today, the patrons of a bar at happy hour are more likely to be looking at their cell phones than chatting with each other). A few clips:

I cannot get used to seeing myriads of people in the street peering into little boxes or holding them in front of their faces, walking blithely in the path of moving traffic, totally out of touch with their surroundings. I am most alarmed by such distraction and inattention when I see young parents staring at their cell phones and ignoring their own babies as they walk or wheel them along. Such children, unable to attract their parents’ attention, must feel neglected, and they will surely show the effects of this in the years to come.
I am confronted every day with the complete disappearance of the old civilities. Social life, street life, and attention to people and things around one have largely disappeared, at least in big cities, where a majority of the population is now glued almost without pause to phones or other devices—jabbering, texting, playing games, turning more and more to virtual reality of every sort.
I worry more about the subtle, pervasive draining out of meaning, of intimate contact, from our society and our culture. When I was eighteen, I read Hume for the first time, and I was horrified by the vision he expressed in his eighteenth-century work “A Treatise of Human Nature,” in which he wrote that mankind is “nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement.” As a neurologist, I have seen many patients rendered amnesic by destruction of the memory systems in their brains, and I cannot help feeling that these people, having lost any sense of a past or a future and being caught in a flutter of ephemeral, ever-changing sensations, have in some way been reduced from human beings to Humean ones.
I have only to venture into the streets of my own neighborhood, the West Village, to see such Humean casualties by the thousand: younger people, for the most part, who have grown up in our social-media era, have no personal memory of how things were before, and no immunity to the seductions of digital life. What we are seeing—and bringing on ourselves—resembles a neurological catastrophe on a gigantic scale.
I see science, with its depth of thought, its palpable achievements and potentials, as equally important; and science, good science, is flourishing as never before, though it moves cautiously and slowly, its insights checked by continual self-testing and experimentation. I revere good writing and art and music, but it seems to me that only science, aided by human decency, common sense, farsightedness, and concern for the unfortunate and the poor, offers the world any hope in its present morass. This idea is explicit in Pope Francis’s encyclical and may be practiced not only with vast, centralized technologies but by workers, artisans, and farmers in the villages of the world. Between us, we can surely pull the world through its present crises and lead the way to a happier time ahead. As I face my own impending departure from the world, I have to believe in this—that mankind and our planet will survive, that life will continue, and that this will not be our final hour.