I want to pass on some selected clips from a marvelous article by Kwame Anthony Appiah in The Atlantic titled "The Age of De-Skilling - Will AI stretch our minds - or stunt them?" Appiah is a professor of philosophy and law at New York University. Here are the clips:
Human capability resides not solely in individuals but in the networks they form, each of us depending on others to fill in what we can’t supply ourselves. Scale turned social exchange into systemic interdependence.
The result is a world in which, in a classic example, nobody knows how to make a pencil. An individual would need the skills of foresters, saw millers, miners, chemists, lacquerers—an invisible network of crafts behind even the simplest object.
The widening of collaboration has changed what it means to know something. Knowledge, once imagined as a possession, has become a relation—a matter of how well we can locate, interpret, and synthesize what others know. We live inside a web of distributed intelligence, dependent on specialists, databases, and instruments to extend our reach. The scale tells the story: The Nature paper that announced the structure of DNA had two authors; a Nature paper in genomics today might have 40.
...most modern work is collaborative, and the arrival of AI hasn’t changed that. The issue isn’t how humans compare to bots but how humans who use bots compare to those who don’t.
In other domains, the more skillful the person, the more skillful the collaboration—or so some recent studies suggest. One of them found that humans outperformed bots when sorting images of two kinds of wrens and two kinds of woodpeckers. But when the task was spotting fake hotel reviews, the bots won. (Game recognizes game, I guess.) Then the researchers paired people with the bots, letting the humans make judgments informed by the machine’s suggestions. The outcome depended on the task. Where human intuition was weak, as with the hotel reviews, people second-guessed the bot too much and dragged the results down. Where their intuitions were good, they seemed to work in concert with the machine, trusting their own judgment when they were sure of it and realizing when the system had caught something they’d missed. With the birds, the duo of human and bot beat either alone.
The same logic holds elsewhere: Once a machine enters the workflow, mastery may shift from production to appraisal. A 2024 study of coders using GitHub Copilot found that AI use seemed to redirect human skill rather than obviate it. Coders spent less time generating code and more time assessing it—checking for logic errors, catching edge cases, cleaning up the script. The skill migrated from composition to supervision.
That, more and more, is what “humans in the loop” has to mean. Expertise shifts from producing the first draft to editing it, from speed to judgment. Generative AI is a probabilistic system, not a deterministic one; it returns likelihoods, not truth. When the stakes are real, skilled human agents have to remain accountable for the call—noticing when the model has drifted from reality, and treating its output as a hypothesis to test, not an answer to obey. It’s an emergent skill, and a critical one. The future of expertise will depend not just on how good our tools are but on how well we think alongside them.
More radical, new technologies can summon new skills into being. Before the microscope, there were naturalists but no microscopists: Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of seeing and interpreting the invisible. Filmmaking didn’t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now. Working with large language models, my younger colleagues insist, is already teaching a new kind of craftsmanship—prompting, probing, catching bias and hallucination, and, yes, learning to think in tandem with the machine. These are emergent skills, born of entanglement with a digital architecture that isn’t going anywhere. Important technologies, by their nature, will usher forth crafts and callings we don’t yet have names for.
The hard part is deciding, without nostalgia and inertia, which skills are keepers and which are castoffs. None of us likes to see hard-won abilities discarded as obsolete, which is why we have to resist the tug of sentimentality. Every advance has cost something. Literacy dulled feats of memory but created new powers of analysis. Calculators did a number on mental arithmetic; they also enabled more people to “do the math.” Recorded sound weakened everyday musical competence but changed how we listen. And today? Surely we have some say in whether LLMs expand our minds or shrink them.
Throughout human history, our capabilities have never stayed put. Know-how has always flowed outward—from hand to tool to system. Individual acumen has diffused into collective, coordinated intelligence, propelled by our age-old habit of externalizing thought: stowing memory in marks, logic in machines, judgment in institutions, and, lately, prediction in algorithms. The specialization that once produced guilds now produces research consortia; what once passed among masters and apprentices now circulates through networks and digital matrices. Generative AI—a statistical condensation of human knowledge—is simply the latest chapter in our long apprenticeship to our own inventions.
The most pressing question, then, is how to keep our agency intact: how to remain the authors of the systems that are now poised to take on so much of our thinking. Each generation has had to learn how to work with its newly acquired cognitive prostheses, whether stylus, scroll, or smartphone. What’s new is the speed and intimacy of the exchange: tools that learn from us as we learn from them. Stewardship now means ensuring that the capacities in which our humanity resides—judgment, imagination, understanding—stay alive in us. If there’s one skill we can’t afford to lose, it’s the skill of knowing which of them matter.