I want to pass on the final paragraphs of a
recent commentary by Venkatesh Rao on the tragedy of the Titan submersible, which was a consequence of Stockton Rush, the CEO of OceanGate Expeditions, taking a number of design risks to reduce costs and increase profits. The bulk of Rao's piece deals with issues in the design of potentially dangerous new technologies, and the final paragraphs deal with managing the risks of artificial intelligence in pragmatic ways.
...AI risk, understood as something very similar to ordinary kinds of engineering risk (such as the risk of submersibles imploding), is an important matter, but lurid theological conceptions of AI risk and “alignment” are a not-even-wrong basis for managing it. The Titan affair, as an object lesson in traditional risk-management, offers many good lessons for how to manage real AI risks in pragmatic ways.
But there’s another point, a novel one, that is present in the case of AI that I don’t think has ever been present in technological leaps of the past.
AI is different from other technologies in that it alters the felt balance between knowledge and incomprehension that shapes our individual and collective risk-taking in the world.
AIs are already very good at embodying knowledge, and better at explaining many complex matters than most humans. But they are not yet very good at embodying doubt and incomprehension. They structurally lack epistemic humility and the ability to act on a consciousness of ignorance in justifiably arbitrary ways (i.e., on the basis of untheorized conservative decision principles backed by a track record). This is something bureaucratic standards bodies do very well. It is something that “software bureaucracies” (such as RLHF — reinforcement learning from human feedback) don’t do very well at all. The much-demonized (by the entrepreneurial class) risk-aversion of bureaucrats is also a kind of ex-officio epistemic humility that is an essential ingredient of technology ecosystems.
On the flip side, AI itself is currently a technology full of incomprehensibilities. We understand the low-level mechanics of graph weights, gradient descents, backpropagation, and matrix multiplications. We do not understand how that low-level machinery produces the emergent outcomes it does. Our incomprehensions about AI are comparable to our incomprehensions about our own minds. This makes them extremely well-suited (impedance matched) to being bolted onto our minds as cognitive prosthetics that feel very comfortable, increase our confidence about what we think we know, and turn into extensions of ourselves (this is not exactly surprising, given that they are trained on human-generated data).
As with submersibles, we are at an alchemical stage of understanding with AIs, but because of the nature of the technology itself, we might develop a prosthetic overconfidence in our state of indirect knowledge about the world, via AI.
AI might turn all humans who use it into Stockton Rushes.
The risk that AIs might destroy us in science-fictional ways is overblown, but the risk that they might tempt us into generalized epistemic overconfidence, and systematically blind us to our incomprehensions, leading us to hurt ourselves in complex ways, is probably not sufficiently recognized.
Already, masses of programmers are relying on AIs like GitHub Copilot, and acting with a level of confidence in generated code that is likely not justified. AI-augmented programmers, even if sober and cautious as unaugmented individuals, might be taking Stockton-Rush-type risks due to the false confidence induced by their tools. I don’t know that this is true, but the reports I see about people being 10x more productive and taking pleasure in programming again strike me as warning signs. I suspect there might be premature aestheticization going on here.
And I suspect it will take a few AI-powered Titan-like tragedies for us to wise up and do something about it.
One way to think about this risk is by analogy to WMDs. Most people think of nuclear weapons when they hear the phrase, but perhaps the most destructive WMDs in the world are cheap, highly effective small arms, which have made conflicts far deadlier over the last century and killed far more humans in aggregate than nuclear weapons.
You do not need to worry about a single AI going “AGI” and bringing God-like catastrophes of malice or indifference down upon us. We lack the “nuclear science” to make that sort of thing happen. But you do need to be worried about millions of ordinary humans, drawn into drunken overconfidence by AI tools, wreaking the kind of havoc small arms do.