The real risk of an A.G.I.... may stem not from malice, or emergent self-consciousness, but simply from autonomy. Intelligence entails control, and an A.G.I. will be the apex cogitator. From this perspective, an A.G.I., however well intentioned, would likely prove as destructive to us as any Bond villain. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” Bostrom writes in his 2014 book, “Superintelligence,” a closely reasoned, cumulatively terrifying examination of all the ways in which we’re unprepared to make our masters. A recursively self-improving A.G.I. won’t be smart like Einstein but “smart in the sense that an average human being is smart compared with a beetle or a worm.” How the machines take dominion is just a detail: Bostrom suggests that “at a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe.” That sounds screenplay-ready—but, ever the killjoy, he notes, “In particular, the AI does not adopt a plan so stupid that even we present-day humans can foresee how it would inevitably fail. This criterion rules out many science fiction scenarios that end in human triumph.”
If we can’t control an A.G.I., can we at least load it with beneficent values and insure that it retains them once it begins to modify itself? Max Tegmark observes that a woke A.G.I. may well find the goal of protecting us “as banal or misguided as we find compulsive reproduction.” He lays out twelve potential “AI Aftermath Scenarios,” including “Libertarian Utopia,” “Zookeeper,” “1984,” and “Self-Destruction.” Even the nominally preferable outcomes seem worse than the status quo. In “Benevolent Dictator,” the A.G.I. “uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that’s really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful.” And more or less indistinguishable from highly immersive video games or a simulation.
Trying to stay optimistic, by his lights—bear in mind that Tegmark is a physicist—he points out that an A.G.I. could explore and comprehend the universe at a level we can’t even imagine. He therefore encourages us to view ourselves as mere packets of information that A.I.s could beam to other galaxies as a colonizing force. “This could be done either rather low-tech by simply transmitting the two gigabytes of information needed to specify a person’s DNA and then incubating a baby to be raised by the AI, or the AI could nanoassemble quarks and electrons into full-grown people who would have all the memories scanned from their originals back on Earth.” Easy peasy. He notes that this colonization scenario should make us highly suspicious of any blueprints an alien species beams at us. It’s less clear why we ought to fear alien blueprints from another galaxy, yet embrace the ones we’re about to bequeath to our descendants (if any).
A.G.I. may be a recurrent evolutionary cul-de-sac that explains Fermi’s paradox: though conditions for intelligent life likely exist on billions of planets in our galaxy alone, we see no sign of it. Tegmark concludes that “it appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us.” Therefore, “to program a friendly AI, we need to capture the meaning of life.” Uh-huh.
In the meantime, we need a Plan B. Bostrom’s starts with an effort to slow the race to create an A.G.I., to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.
So the plan, after we create our own god, would be to bow to it and hope it doesn’t require a blood sacrifice. An autonomous-car engineer named Anthony Levandowski has set out to start a religion in Silicon Valley, called Way of the Future, that proposes to do just that. After “The Transition,” the church’s believers will venerate “a Godhead based on Artificial Intelligence.” Worship of the intelligence that will control us, Levandowski told a Wired reporter, is the only path to salvation; we should use such wits as we have to choose the manner of our submission. “Do you want to be a pet or livestock?” he asked. I’m thinking, I’m thinking . . . ♦