Sunday, September 29, 2024

Tokens of sanity for anxious times

-Being the space in which nothing can hurry
-An animal body that pretends to be human
-Dissociating from the word cloud and emotional reactivities of self and other selves
-A courteous guest in one’s own body and with others
-Awareness as a dim glow highlighting different areas of the internal symphony
-Letting each moment be what it is, not what it should be
-Clinging to nothing, the current self being a passing fantasy
-Being more curatorial than aspirational
-Favoring reflectivity over reactivity, caressing novelty
-Being a blip in the flow of cosmic time

Saturday, September 28, 2024

Networks of connectivity are the battleground of the future.

From Nathan Gardels, editor of Noema Magazine: "From Mass To Distributed Weapons Of Destruction":

The recent lethal attacks attributed to Israel that exploded pagers and walkie-talkies dispersed among thousands of Hezbollah militants announce a new capacity in the history of warfare for distributed destruction. Before the massive bombing raids that have since ensued, the terror-stricken population of Lebanon had been unplugging any device with batteries or a power source linked to a communication network for fear it might blow up in their faces.

The capability to simultaneously strike the far-flung tentacles of a network is only possible in this new era of connectivity that binds us all together. It stands alongside the first aerial bombing in World War I and the use of nuclear weapons by the U.S. in Japan at the end of World War II as a novel weapon of its technological times that will, sooner or later, proliferate globally.

Like these earlier inventions of warfare, the knowledge and technology that is at the outset the sole province of the clever first mover will inevitably spread to others with different, and even precisely opposite, interests and intentions. The genie is out of the bottle and can’t be put back. In time, it will be available to anyone with the wherewithal to summon it for their own purposes.

While Hezbollah reels, we can be sure that the defense establishments in every nation, from Iran to Russia, China and the U.S., are scrambling to get ahead of this new reality by seeking advantage over any adversary who is surely trying to do the same. 

Back in 1995, the Aum Shinrikyo cult released the deadly nerve agent, sarin, in a Tokyo subway, killing 13 and sickening some 5,500 commuters. In an interview at the time, the futurist Alvin Toffler observed that “what we’ve seen in Japan is the ultimate devolution of power: the demassification of mass-destruction weapons … where an individual or group can possess the means of mass destruction if he or she has the information to make them. And that information is increasingly available.”

Even that foresightful thinker could not envision then that not only can individuals or groups gain access to knowledge of the ways and means of mass destruction through information networks, but that the networks for accessing that knowledge and connecting individuals or groups can themselves serve as a delivery system for hostile intervention against their users.

Though the Israeli attacks reportedly involved low-tech logistical hacking of poorly monitored supply chains, it doesn’t take an AI scientist to see the potential of distributed warfare in today’s Internet of Things, where all devices are synced, from smartphones to home alarm systems to GPS in your car or at your bank’s ATM.

Ever-more powerful AI models will be able to algorithmically deploy programmed instructions back through the same network platforms from which they gather their vast amounts of data.

It is no longer a secret that the CIA and Israeli Mossad temporarily disabled Iran’s nuclear fuel centrifuges in 2009 by infecting their operating system with the Stuxnet malware. That such targeted attacks could also be scaled up and distributed across an array of devices through new AI tools is hardly a stretch of the imagination.

The writing, or code, is clearly on the wall after the Hezbollah attack. Dual-use networks will be weaponized as the battleground of the future. The very platforms that bring people together can also be what blows them apart.

Sunday, September 15, 2024

A caustic review of Yuval Harari's "Nexus"

I pass on the very cogent opinions of Dominic Green, fellow of the Royal Historical Society, that appeared in the Sept. 13 issue of the Wall Street Journal. He offers several caustic comments on ideas offered in Yuval Harari's most recent book, "Nexus."

Groucho Marx said there are two types of people in this world: “those who think people can be divided up into two types, and those who don’t.” In “Nexus,” the Israeli historian-philosopher Yuval Noah Harari divides us into a naive and populist type and another type that he prefers but does not name. This omission is not surprising. The opposite of naive and populist might be wise and pluralist, but it might also be cynical and elitist. Who would admit to that?

Mr. Harari is the author of the bestselling “Sapiens,” a history of our species written with an eye on present anxieties about our future. “Nexus,” a history of our society as a series of information networks and a warning about artificial intelligence, uses a similar recipe. A dollop of historical anecdote is seasoned with a pinch of social science and a spoonful of speculation, topped with a soggy crust of prescription, and lightly dusted with premonitions of the apocalypse that will overcome us if we refuse a second serving. “Nexus” goes down easily, but it isn’t as nourishing as it claims. Much of it leaves a sour taste.

Like the Victorian novel and Caesar’s Gaul, “Nexus” divides into three parts. The first part describes the development of complex societies through the creation and control of information networks. The second argues that the digital network is both quantitatively and qualitatively different from the print network that created modern democratic societies. The third presents the AI apocalypse. An “alien” information network gone rogue, Mr. Harari warns, could “supercharge existing human conflicts,” leading to an “AI arms race” and a digital Cold War, with rival powers divided by a Silicon Curtain of chips and code.

Information, Mr. Harari writes, creates a “social nexus” among its users. The “twin pillars” of society are bureaucracy, which creates power by centralizing information, and mythology, which creates power by controlling the dispersal of “stories” and “brands.” Societies cohere around stories such as the Bible and communism and “personality cults” and brands such as Jesus and Stalin. Religion is a fiction that stamps “superhuman legitimacy” on the social order. All “true believers” are delusional. Anyone who calls a religion “a true representation of reality” is “lying.” Mr. Harari is scathing about Judaism and Christianity but hardly criticizes Islam. In this much, he is not naive.

Mythologies of religion, history and ideology, Mr. Harari believes, exploit our naive tendency to mistake all information as “an attempt to represent reality.” When the attempt is convincing, the naive “call it truth.” Mr. Harari agrees that “truth is an accurate representation of reality” but argues that only “objective facts” such as scientific data are true. “Subjective facts” based on “beliefs and feelings” cannot be true. The collaborative cacophony of “intersubjective reality,” the darkling plain of social and political contention where all our minds meet, also cannot be fully true.

Digitizing our naivety has, Mr. Harari believes, made us uncontrollable and incorrigible. “Nexus” is most interesting, and most flawed, when it examines our current situation. Digital networks overwhelm us with information, but computers can only create “order,” not “truth” or “wisdom.” AI might take over without developing human-style consciousness: “Intelligence is enough.” The nexus of machine-learning, algorithmic “user engagement” and human nature could mean that “large-scale democracies may not survive the rise of computer technology.”

The “main split” in 20th-century information was between closed, pseudo-infallible “totalitarian” systems and open, self-correcting “democratic” systems. As Mr. Harari’s third section describes, after the flood of digital information, the split will be between humans and machines. The machines will still be fallible. Will they allow us to correct them? Though “we aren’t sure” why the “democratic information network is breaking down,” Mr. Harari nevertheless argues that “social media algorithms” play such a “divisive” role that free speech has become a naive luxury, unaffordable in the age of AI. He “strongly disagrees” with Louis Brandeis’s opinion in Whitney v. California (1927) that the best way to combat false speech is with more speech.

The survival of democracy requires “regulatory institutions” that will “vet algorithms,” counter “conspiracy theories” and prevent the rise of “charismatic leaders.” Mr. Harari never mentions the First Amendment, but “Nexus” amounts to a sustained argument for its suppression. Unfortunately, his grasp of politics is tenuous and hyperbolic. He seems to believe that populism was invented with the iPhone rather than being a recurring bug that appears when democratic operating systems become corrupted or fail to update their software. He consistently confuses democracy (a method of gauging opinion with a long history) with liberalism (a mostly Anglo-American legal philosophy with a short history). He defines democracy as “an ongoing conversation between diverse information nodes,” but the openness of the conversation and the independence of its nodes derive from liberalism’s rights of individual privacy and speech. Yet “liberalism” appears nowhere in “Nexus.” Mr. Harari isn’t much concerned with liberty and justice either.

In “On Naive and Sentimental Poetry” (1795-96), Friedrich Schiller divided poetry between two modes. The naive mode is ancient and assumes that language is a window into reality. The sentimental mode belongs to our “artificial age” and sees language as a mirror to our inner turmoil. As a reflection of our troubled age of transition, “Nexus” is a mirror to the unease of our experts and elites. It divides people into the cognitively unfit and the informationally pure and proposes we divide power over speech accordingly. Call me naive, but Mr. Harari’s technocratic TED-talking is not the way to save democracy. It is the royal road to tyranny.

The Fear of Diverse Intelligences Like AI

I want to suggest that you read the article by Michael Levin in the Sept. 3 issue of Noema Magazine on how our fear of AI’s potential is emblematic of humanity’s larger difficulty recognizing intelligence in unfamiliar guises. (One needs to be clear, however, that the AI of the GPT engines is not 'intelligence' in the broader sense of the term; they are large language models, LLMs.) Here are some clips from the later portions of his essay:

Why would natural evolution have an eternal monopoly on producing systems with preferences, goals and the intelligence to strive to meet them? How do you know that bodies whose construction includes engineered, rational input in addition to emergent physics, instead of exclusively random mutations (the mainstream picture of evolution), do not have what you mean by emotion, intelligence and an inner perspective? 

Do cyborgs (at various percentage combinations of human brain and tech) have the magic that you have? Do single cells? Do we have a convincing, progress-generating story of why the chemical system of our cells, which is compatible with emotion, would be inaccessible to construction by other intelligences in comparison to the random meanderings of evolution?

We have somewhat of a handle on emergent complexity, but we have only begun to understand emergent cognition, which appears in places that are hard for us to accept. The inner life of partially (or wholly) engineered embodied action-perception agents is no more obvious (or limited) by looking at the algorithms that its engineers wrote than is our inner life derivable from the laws of chemistry that reductionists see when they zoom into our cells. The algorithmic picture of a “machine” is no more the whole story of engineered constructs, even simple ones, than are the laws of chemistry the whole story of human minds.

Figuring out how to relate to minds of unconventional origin — not just AI and robotics but also cells, organs, hybrots, cyborgs and many others — is an existential-level task for humanity as it matures.

Our current educational materials give people the false idea that they understand the limits of what different types of matter can do.  The protagonist in the “Ex Machina” movie cuts himself to determine whether he is also a robotic being. Why does this matter so much to him? Because, like many people, if he were to find cogs and gears underneath his skin, he would suddenly feel lesser than, rather than considering the possibility that he embodied a leap-forward for non-organic matter.  He trusts the conventional story of what intelligently arranged cogs and gears cannot do (but randomly mutated, selected protein hardware can) so much that he’s willing to give up his personal experience as a real, majestic being with consciousness and agency in the world.

The correct conclusion from such a discovery — “Huh, cool, I guess cogs and gears can form true minds!” — is inaccessible to many because the reductive story of inorganic matter is so ingrained. People often assume that though they cannot articulate it, someone knows why consciousness inhabits brains and is nowhere else. Cognitive science must be more careful and honest when exporting to society a story of where the gaps in knowledge lie and which assumptions about the substrate and origin of minds are up for revision.

It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that any benevolent intelligence planned for us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want.

The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all these fears under macho facades of protecting real, honest-to-goodness humans and their relationships, but it’s transparent and it won’t hold.

Everything — not just technology, but also ethics — will change. Thus, my challenges to all of us are these. State your positive vision of the future — not just the ubiquitous lists of the fearful things you don’t want but specify what you do want. In 100 years, is humanity still burdened by disease, infirmity, the tyranny of deoxyribonucleic acid, and behavioral firmware developed for life on the savannah? What will a mature species’ mental frameworks look like?

“Other, unconventional minds are scary, if you are not sure of your own — its reality, its quality and its ability to offer value in ways that don’t depend on limiting others.”

Clarify your beliefs: Make explicit the reasons for your certainties about what different architectures can and cannot do; include cyborgs and aliens in the classifications that drive your ethics. I especially call upon anyone who is writing, reviewing or commenting on work in this field to be explicit about your stance on the cognitive status of the chemical system we call a paramecium, the ethical position of life-machine hybrids such as cyborgs, the specific magic thing that makes up “life” (if there is any), and the scientific and ethical utility of the crisp categories you wish to preserve.

Take your organicist ideas more seriously and find out how they enrich the world beyond the superficial, contingent limits of the products of random evolution. If you really think there is something in living beings that goes beyond all machine metaphors, commit to this idea and investigate what other systems, beyond our evo-neuro-chauvinist assumptions, might also have this emergent cognition.

Consider that the beautiful, ineffable qualities of inner perspective and goal-directedness may manifest far more broadly than is easily recognized. Question your unwarranted confidence in what “mere matter” can do, and entertain the humility of emergent cognition, not just emergent complexity. Recognize the kinship we have with other minds and the fact that all learning requires your past self to be modified and replaced by an improved, new version. Rejoice in the opportunity for growth and change and take responsibility for guiding the nature of that change.

Go further — past the facile stories of what could go wrong in the future and paint the future you do want to work toward. Transcend scarcity and redistribution of limited resources, and help grow the pot. It’s not just for you — it’s for your children and for future generations, who deserve the right to live in a world unbounded by ancient, pre-scientific ideas and their stranglehold on our imaginations, abilities, and ethics.