
Wednesday, May 15, 2024

Collective behavior from surprise minimization

A fascinating model for collective behavior from Heins et al.:

Significance

We introduce a model of collective behavior, proposing that individual members within a group, such as a school of fish or a flock of birds, act to minimize surprise. This active inference approach naturally generates well-known collective phenomena such as cohesion and directed movement without explicit behavioral rules. Our model reveals intricate relationships between individual beliefs and group properties, demonstrating that beliefs about uncertainty can shape collective decision-making accuracy. As agents update their generative model in real time, groups become more sensitive to external perturbations and more robust in encoding information. Our work provides fresh insights into understanding collective dynamics and could inspire strategies in the study of animal behavior, swarm robotics, and distributed systems.

Abstract

Collective motion is ubiquitous in nature; groups of animals, such as fish, birds, and ungulates appear to move as a whole, exhibiting a rich behavioral repertoire that ranges from directed movement to milling to disordered swarming. Typically, such macroscopic patterns arise from decentralized, local interactions among constituent components (e.g., individual fish in a school). Preeminent models of this process describe individuals as self-propelled particles, subject to self-generated motion and “social forces” such as short-range repulsion and long-range attraction or alignment. However, organisms are not particles; they are probabilistic decision-makers. Here, we introduce an approach to modeling collective behavior based on active inference. This cognitive framework casts behavior as the consequence of a single imperative: to minimize surprise. We demonstrate that many empirically observed collective phenomena, including cohesion, milling, and directed motion, emerge naturally when considering behavior as driven by active Bayesian inference—without explicitly building behavioral rules or goals into individual agents. Furthermore, we show that active inference can recover and generalize the classical notion of social forces as agents attempt to suppress prediction errors that conflict with their expectations. By exploring the parameter space of the belief-based model, we reveal nontrivial relationships between the individual beliefs and group properties like polarization and the tendency to visit different collective states. We also explore how individual beliefs about uncertainty determine collective decision-making accuracy. Finally, we show how agents can update their generative model over time, resulting in groups that are collectively more sensitive to external fluctuations and encode information more robustly.
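The flavor of the paper's central idea, that cohesion can emerge from agents suppressing prediction error rather than following explicit flocking rules, can be evoked with a toy simulation. To be clear, this is my own illustrative sketch, not the authors' generative model: the sensing radius, the "prior" preferred distance, and the gradient update are all invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, steps, lr = 40, 300, 0.1
sense_radius = 3.0      # assumed local sensing range (illustrative)
preferred_dist = 1.0    # each agent's "prior": neighbors ~1 unit away

pos = rng.uniform(-5, 5, (n_agents, 2))
initial_spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()

for _ in range(steps):
    new_pos = pos.copy()
    centroid = pos.mean(axis=0)
    for i in range(n_agents):
        diffs = pos - pos[i]
        dists = np.linalg.norm(diffs, axis=1)
        near = (dists > 0) & (dists < sense_radius)
        if not near.any():
            # no neighbors in view: drift toward the group
            new_pos[i] += lr * (centroid - pos[i])
            continue
        # "prediction error": observed minus expected neighbor distance
        err = dists[near] - preferred_dist
        # descend the error: approach far neighbors, back away from near ones
        step = (err[:, None] * diffs[near] / dists[near][:, None]).mean(axis=0)
        new_pos[i] += lr * step
    pos = new_pos

final_spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()
# cohesion emerges without any explicit "cohere" rule: spread shrinks
```

Note how short-range repulsion and longer-range attraction, the classical "social forces," both fall out of the single error-minimizing update, which is the paper's point about recovering social forces from inference.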

Friday, March 01, 2024

The Hidden History of Debt

I pass on this link from the latest Human Bridges newsletter, and would encourage readers to subscribe to and support the Observatory's Human Bridges project, which is part of the Independent Media Institute:

Recent scientific findings and research in the study of human origins and our biology, paleoanthropology, and primate research have reached a key threshold: we are increasingly able to trace the outlines and fill in the blanks of our evolutionary story that began 7 million years ago to the present, and understand the social and cultural processes that produced the world we live in now.

Monday, February 26, 2024

The "enjunkification" of our online lives

I want to pass on two articles I've pored over several times that describe the increasing "complexification" or "enjunkification" of our online lives. The first is "The Year Millennials Aged Out of the Internet" by millennial writer Max Read. Here are some clips from the article. 

Something is changing about the internet, and I am not the only person to have noticed. Everywhere I turned online this year, someone was mourning: Amazon is “making itself worse” (as New York magazine moaned); Google Search is a “bloated and overmonetized” tragedy (as The Atlantic lamented); “social media is doomed to die,” (as the tech news website The Verge proclaimed); even TikTok is becoming enjunkified (to bowdlerize an inventive coinage of the sci-fi writer Cory Doctorow, republished in Wired). But the main complaint I have heard was put best, and most bluntly, in The New Yorker: “The Internet Isn’t Fun Anymore.”

The heaviest users and most engaged American audience on the internet are no longer millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it’s not our internet anymore. It belongs to zoomers now...zoomers, and the adolescents in Generation Alpha nipping at their generational heels, still seem to be having plenty of fun online. Even if I find it all inscrutable and a bit irritating, the creative expression and exuberant sociality that made the internet so fun to me a decade ago are booming among 20-somethings on TikTok, Instagram, Discord, Twitch and even X.

...even if you’re jealous of zoomers and their Discord chats and TikTok memes, consider that the combined inevitability of enjunkification and cognitive decline means that their internet will die, too, and Generation Alpha will have its own era of inscrutable memes and alienating influencers. And then the zoomers can join millennials in feeling what boomers have felt for decades: annoyed and uncomfortable at the computer.

The second article I mention is Jon Caramanica's "Have We Reached the End of TikTok’s Infinite Scroll?" Again, a few clips:

The app once offered seemingly endless chances to be charmed by music, dances, personalities and products. But in only a few short years, its promise of kismet is evaporating...increasingly in recent months, scrolling the feed has come to resemble fumbling in the junk drawer: navigating a collection of abandoned desires, who-put-that-here fluff and things that take up awkward space in a way that blocks access to what you’re actually looking for.
This has happened before, of course — the moment when Twitter turned from good-faith salon to sinister outrage derby, or when Instagram, and its army of influencers, learned to homogenize joy and beauty...the malaise that has begun to suffuse TikTok feels systemic, market-driven and also potentially existential, suggesting the end of a flourishing era and the precipice of a wasteland period.
It’s an unfortunate result of the confluence of a few crucial factors. Most glaring is the arrival of TikTok’s shopping platform, which has turned even small creators into spokespeople and the for-you page of recommendations into an unruly bazaar...The effect of seeing all of these quasi-ads — QVC in your pocket — is soul-deadening...The speed and volume of the shift has been startling. Over time, Instagram became glutted with sponsored content and buy links, but its shopping interface never derailed the overall experience of the app. TikTok Shop has done that in just a few months, spoiling a tremendous amount of good will in the process.


Wednesday, February 21, 2024

AI makes our humanity matter more than ever.

I want to pass on this link to an NYTimes Opinion guest essay by Aneesh Raman, a work force expert at LinkedIn. A few clips:

Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”

The knowledge economy that we have lived in for decades emerged out of a goods economy that we lived in for millenniums, fueled by agriculture and manufacturing. Today the knowledge economy is giving way to a relationship economy, in which people skills and social abilities are going to become even more core to success than ever before. That possibility is not just cause for new thinking when it comes to work force training. It is also cause for greater imagination when it comes to what is possible for us as humans not simply as individuals and organizations but as a species.

Wednesday, February 14, 2024

How long has humanity been at war with itself?

I would like to point MindBlog readers to an article by Deborah Barsky with the title of this post. The following clip provides relevant links to the Human Bridges project of the Independent Media Institute. 

Deborah Barsky is a writing fellow for the Human Bridges project of the Independent Media Institute, a researcher at the Catalan Institute of Human Paleoecology and Social Evolution, and an associate professor at the Rovira i Virgili University in Tarragona, Spain, with the Open University of Catalonia (UOC). She is the author of Human Prehistory: Exploring the Past to Understand the Future (Cambridge University Press, 2022).

Friday, February 09, 2024

Bodily maps of musical sensations across cultures

Interesting work from Putkinen et al. (open source):  

Significance

Music is inherently linked with the body. Here, we investigated how music's emotional and structural aspects influence bodily sensations and whether these sensations are consistent across cultures. Bodily sensations evoked by music varied depending on its emotional qualities, and the music-induced bodily sensations and emotions were consistent across the tested cultures. Musical features also influenced the emotional experiences and bodily sensations consistently across cultures. These findings show that bodily feelings contribute to the elicitation and differentiation of music-induced emotions and suggest similar embodiment of music-induced emotions in geographically distant cultures. Music-induced emotions may transcend cultural boundaries due to cross-culturally shared links between musical features, bodily sensations, and emotions.
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.
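A small illustration of the kind of comparison underlying this study: a bodily sensation map can be treated as a vector of per-region activation ratings, and cross-cultural consistency as the correlation between two cultures' vectors. The region list and all numbers below are made up for the sketch; they are not the study's data.

```python
import numpy as np

# Hypothetical mean activation ratings (0-10 scale) for one musical piece,
# one vector per culture; region order is an illustrative assumption.
regions = ["head", "chest", "arms", "hands", "legs", "feet"]
western = np.array([7.0, 8.5, 4.0, 6.0, 2.0, 1.5])      # invented values
east_asian = np.array([6.5, 8.0, 4.5, 5.5, 2.5, 1.0])   # invented values

def bsm_similarity(a, b):
    """Pearson correlation between two bodily sensation maps."""
    return float(np.corrcoef(a, b)[0, 1])

r = bsm_similarity(western, east_asian)
# a correlation near 1 would indicate similar embodiment across cultures
```

With maps for many pieces and emotions, the same similarity matrix can be fed to a clustering routine, which is roughly how the paper's cross-cultural cluster comparison proceeds.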

Wednesday, February 07, 2024

Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment

I pass on to MindBlog readers the abstract of a recent Behavioral and Brain Sciences article by Sijilmassi et al. titled "‘Our Roots Run Deep’: Historical Myths as Culturally Evolved Technologies for Coalitional Recruitment."  Motivated readers can obtain a PDF of the article from me. 

One of the most remarkable manifestations of social cohesion in large-scale entities is the belief in a shared, distinct and ancestral past. Human communities around the world take pride in their ancestral roots, commemorate their long history of shared experiences, and celebrate the distinctiveness of their historical trajectory. Why do humans put so much effort into celebrating a long-gone past? Integrating insights from evolutionary psychology, social psychology, evolutionary anthropology, political science, cultural history and political economy, we show that the cultural success of historical myths is driven by a specific adaptive challenge for humans: the need to recruit coalitional support to engage in large scale collective action and prevail in conflicts. By showcasing a long history of cooperation and shared experiences, these myths serve as super-stimuli, activating specific features of social cognition and drawing attention to cues of fitness interdependence. In this account, historical myths can spread within a population without requiring group-level selection, as long as individuals have a vested interest in their propagation and strong psychological motivations to create them. Finally, this framework explains, not only the design-features of historical myths, but also important patterns in their cross-cultural prevalence, inter-individual distribution, and particular content.

Monday, January 01, 2024

On shifting perspectives....

I pass on clips from a 12/2023 piece in the Wall Street Journal by Carlo Rovelli, the author, most recently, of ‘White Holes: Inside the Horizon’.

Somnium

By Johannes Kepler (1634)

1. Perhaps the greatest conceptual earthquake in the history of civilization was the Copernican Revolution. Prior to Copernicus, there were two realms: the celestial and the terrestrial. Celestial things orbit, terrestrial ones fall. The former are eternal, the latter perishable. Copernicus proposed a different organization of reality, in which the sun is in a class of its own. In another class are the planets, with the Earth being merely one among many. The moon is in yet another class, all by itself. Everything revolves around the sun, but the moon revolves around the Earth. This mad subversion of conventional reason was taken seriously only after Galileo and Kepler convinced humankind that Copernicus was indeed right. “Somnium” (“The Dream”) is the story of an Icelandic boy—Kepler’s alter ego—his witch mother and a daemon. The daemon takes the mother and son up to the moon to survey the universe, showing explicitly that what they usually see from Earth is the perspective from a moving body. Sheer genius.

History

By Elsa Morante (1974)

2. This passionate and intelligent novel is a fresco of Italy during World War II. “La Storia,” its title in Italian, can be translated as “story” or “tale” as well as “history.” Elsa Morante plumbs the complexity of humankind and its troubles, examining the sufferings caused by war. She writes from the view of the everyday people who bear the burden of the horror. This allows her to avoid taking sides and to see the humanity in both. The subtitle of this masterpiece—“a scandal that has lasted for ten thousand years”—captures Morante’s judgment of war, inviting us to a perspective shift on all wars.

Collected Poems of Lenore Kandel

By Lenore Kandel (2012)

3. Lenore Kandel was a wonderful and underrated poet who was part of the Beat-hippie movement in California. The tone of her poems varies widely, from bliss to desperation: “who finked on the angels / who stole the holy grail and hocked it for a jug of wine?” She created a scandal in the late 1960s by writing about sex in a strong, vivid way. Her profoundly anticonformist voice offers a radical shift of perspective by singing the beauty and the sacredness of female desire.

Why Empires Fall

By Peter Heather and John Rapley (2023)

4. As an Italian, I have long been intrigued by the fall of the Roman Empire. Peter Heather and John Rapley summarize the recent historiographic reassessments of the reasons for the fall. Their work also helps in understanding the present. Empires don’t necessarily collapse because they weaken. They fall because their success brings prosperity to a wider part of the world. They fall if they cannot adjust to the consequent rebalancing of power and if they try to stop history with the sheer power of weapons. “The easiest response to sell to home audiences still schooled in colonial history is confrontation,” the authors write. “This has major, potentially ruinous costs, compared to the more realistic but less immediately popular approach of accepting the inevitability of the periphery’s rise and trying to engage with it.”

The Mūlamadhyamakakārikā

By Nāgārjuna (ca. A.D. 150)

5. This major work of the ancient Indian Buddhist philosopher Nāgārjuna lives on in modern commentaries and translations. Among the best in English is Jay L. Garfield’s “The Fundamental Wisdom of the Middle Way” (1995). Nāgārjuna’s text was repeatedly recommended to me in relation to my work on the interpretation of quantum theory. I resisted, suspicious of facile and often silly juxtapositions between modern science and Eastern philosophy. Then I read it, and it blew my mind. It does indeed offer a possible philosophical underpinning to relational quantum mechanics, which I consider the best way to understand quantum phenomena. But it offers more: a dizzying and captivating philosophical perspective that renounces any foundation. According to this view, the only way to understand something is through its relation with something else—nothing by itself has an independent reality. In the language of Nāgārjuna, every thing, taken by itself, is “empty,” including emptiness itself. I find this a fascinating intellectual perspective as well as a source of serenity, with its acceptance of our limits and impermanence.

Friday, September 29, 2023

AI, a boon for science and a disaster for creatives

The Sept. 16 issue of the Economist has two excellent articles: How artificial intelligence can revolutionise science and How scientists are using artificial intelligence. I pass on here some edited clips from the first of these articles. I also want to point to much less benign commentary on how AI is moving toward threatening the livelihoods of creators of music, art, and literature: The Internet Is About to Get Much Worse.  

Could AI turbocharge scientific progress and lead to a golden age of discovery?

Some believe that AI can turbocharge scientific progress and lead to a golden age of discovery...Such claims provide a useful counterbalance to fears about large-scale unemployment and killer robots.
Many previous technologies have been falsely hailed as panaceas. The electric telegraph was lauded in the 1850s as a herald of world peace, as were aircraft in the 1900s; pundits in the 1990s said the internet would reduce inequality and eradicate nationalism...but there have been several periods in history when new approaches and new tools did indeed help bring about bursts of world-changing scientific discovery and innovation.
In the 17th century microscopes and telescopes opened up new vistas of discovery and encouraged researchers to favour their own observations over the received wisdom of antiquity, while the introduction of scientific journals gave them new ways to share and publicise their findings. The result was rapid progress in astronomy, physics and other fields, and new inventions from the pendulum clock to the steam engine—the prime mover of the Industrial Revolution.
Then, starting in the late 19th century, the establishment of research laboratories, which brought together ideas, people and materials on an industrial scale, gave rise to further innovations such as artificial fertiliser, pharmaceuticals and the transistor, the building block of the computer...the journal and the laboratory went further still: they altered scientific practice itself and unlocked more powerful means of making discoveries, by allowing people and ideas to mingle in new ways and on a larger scale. AI, too, has the potential to set off such a transformation.
Two areas in particular look promising. The first is “literature-based discovery” (LBD), which involves analysing existing scientific literature, using ChatGPT-style language analysis, to look for new hypotheses, connections or ideas that humans may have missed. LBD is showing promise in identifying new experiments to try—and even suggesting potential research collaborators.
The second area is “robot scientists”, also known as “self-driving labs”. These are robotic systems that use AI to form new hypotheses, based on analysis of existing data and literature, and then test those hypotheses by performing hundreds or thousands of experiments, in fields including systems biology and materials science. Unlike human scientists, robots are less attached to previous results, less driven by bias—and, crucially, easy to replicate.
In 1665, during a period of rapid scientific progress, Robert Hooke, an English polymath, described the advent of new scientific instruments such as the microscope and telescope as “the adding of artificial organs to the natural”. They let researchers explore previously inaccessible realms and discover things in new ways, “with prodigious benefit to all sorts of useful knowledge”. For Hooke’s modern-day successors, the adding of artificial intelligence to the scientific toolkit is poised to do the same in the coming years—with similarly world-changing results.

Friday, September 01, 2023

The fragility of artists’ reputations from 1795 to 2020

Zhang et al. do an interesting study using natural language processing to measure reputation over time:  

Significance

This study uses machine-learning techniques and a historical corpus to examine the evolution of artists’ reputations over time. Contrary to popular wisdom, we find that most artists’ reputations peak just before their death, and then start to decline. This decline is strongest for artists who were most popular during their lifetime. We show that artists’ reduced visibility and changes in the public’s aesthetic taste explain much of the posthumous reputation decline. This study highlights how social perception of historical figures can shift and emphasizes the vulnerability of human reputation. Methodologically, the study illustrates an application of natural language processing to measure reputation over time.
Abstract
This study explores the longevity of artistic reputation. We empirically examine whether artists are more- or less-venerated after their death. We construct a massive historical corpus spanning 1795 to 2020 and build separate word-embedding models for each five-year period to examine how the reputations of over 3,300 famous artists—including painters, architects, composers, musicians, and writers—evolve after their death. We find that most artists gain their highest reputation right before their death, after which it declines, losing nearly one SD every century. This posthumous decline applies to artists in all domains, includes those who died young or unexpectedly, and contradicts the popular view that artists’ reputations endure. Contrary to the Matthew effect, the reputational decline is the steepest for those who had the highest reputations while alive. Two mechanisms—artists’ reduced visibility and the public’s changing taste—are associated with much of the posthumous reputational decline. This study underscores the fragility of human reputation and shows how the collective memory of artists unfolds over time.
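The paper's method, separate word-embedding models per five-year period used to track how an artist's name relates to the rest of the vocabulary, can be approximated in a self-contained way with pointwise mutual information (PMI) between the artist's name and a fame lexicon, computed per period. This is a deliberately crude stand-in for the embedding approach, and the corpora, lexicon, and artist below are invented for illustration.

```python
from collections import Counter
import math

def reputation_score(corpus, artist, lexicon, window=5):
    """Mean PMI between the artist token and fame-lexicon words --
    a crude stand-in for the paper's embedding-based measure."""
    tokens = corpus.lower().split()
    counts = Counter(tokens)
    n = len(tokens)
    pair = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 1 + window]:
            pair[frozenset((w, v))] += 1  # unordered co-occurrence counts
    scores = []
    for word in lexicon:
        joint = pair[frozenset((artist, word))]
        if joint and counts[word]:
            scores.append(math.log(joint * n / (counts[artist] * counts[word])))
    return sum(scores) / len(scores) if scores else 0.0

lexicon = ["celebrated", "brilliant", "master"]
# Two invented "period corpora" mimicking in-life vs. posthumous coverage:
period_1880 = ("monet celebrated master monet brilliant celebrated "
               "painter monet master")
period_1980 = "monet forgotten painter exhibitions ignored monet obscure"

early = reputation_score(period_1880, "monet", lexicon)
late = reputation_score(period_1980, "monet", lexicon)
# with these toy corpora, the score declines across periods: early > late
```

Running the same scoring over a real diachronic corpus, one model per period, is what lets the authors express reputational decline in standard-deviation units per century.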

Friday, July 28, 2023

Unnarratability -The Tower of Babel redux - where have all the common narratives gone?

I pass on some clips from Venkatesh Rao's recent Ribbonfarm Studio posting. Perspectives like his make me feel that one's most effective self-preservation stance might be to assume that we are at the dawn of a new dark age, a period during which only power matters and community, cooperation, and kindness are diminished, like the early Middle Ages in Europe, which did permit a privileged few, under the sheltered circumstances of the church, a life of contemplation.    

Strongly Narratable Conditions

The 1985-2015 period, arguably, was strongly narratable, and unsurprisingly witnessed the appearance of many strong global grand narratives. These mostly hewed to the logic of the there-is-no-alternative (TINA) platform narrative of neoliberalism, even when opposed to it...From Francis Fukuyama and Thomas Friedman in the early years, to Thomas Piketty, Yuval Noah Harari, and David Graeber in the final years, many could, and did, peddle coherent (if not always compelling) Big Histories. Narrative performance venues like TED flourished. The TINA platform narrative supplied the worldwinds for all narratives.
Weakly Narratable Conditions
The 2007-2020 period, arguably, was such a period (the long overlap of 8 years, 2007-15, was a period with uneven weak/strong narratability). In such conditions, a situation is messed-up and contentious, but in a way that lends itself to the flourishing of a pluralist, polycentric narrative landscape, where there are multiple contending accounts of a shared situation, Rashomon style, but the situation is merely ambiguous, not incoherent.
While weakly narratable conditions lack platform narratives, you could argue that there is something of a prevailing narrative protocol during weakly narratable times - an emergent lawful pattern of narrative conflict that cannot be codified into a legible set of consensus rules of narrative engagement, but produces slow noisy progress anyway, does not devolve into confused chaos, and sustains a learnable narrative literacy.
This is what it meant to be “very online” in 2007-20. It meant you had acquired a certain literacy around the prevailing narrative protocol. Perhaps nobody could make sense of what was going on overall, beyond their private, solipsistic account of events, and it was perhaps not possible to play to win, but there was enough coherence in the situation that you could at least play to not lose.
Unnarratable Conditions
The pandemic hit, and we got to what I think of as unnarratable conditions...While the specific story of the pandemic itself was narratable, the story of the wider post-Weirding world, thrown into tumult by the pandemic, was essentially unnarratable.
Unnarratable times are fundamentally incoherent melees of contending historical forces. Times when there isn’t even a narrative protocol you can acquire a reliable literacy in, let alone a platform narrative upon which to rest your sense-making efforts. Where the environmental entropy is so high, people struggle to put together any kind of narrative, even solipsistic private ones that harbor no ambitions of influencing others. There is no privileged class (comparable to the “Very Online” before 2020) that can plausibly claim a greater narrative literacy than other classes.
Those who claim to possess satisfying grand narratives are barely able to persuade even close allies to share it, let alone induce narrative protocols through them, or install them as platform narratives. The result: a collective retreat to a warren of cozy cultural redoubts, usually governed by comforting reactionary or nostalgic local narratives, and a derelict public discourse.
We have been in such a condition at least since 2022, and arguably since 2020. If you set aside the narrow liminal story of the pandemic, the world has been nearly unnarratable for years now.

Friday, May 12, 2023

Virality

This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13 from the seventh section of her book, titled "Virality".

The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our  human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.

The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.

When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. 
Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
            
The general public has become preoccupied by robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.

The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to distinguish between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.

Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that swept the Go championship in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”

If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” a phrase that is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.

If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.


Friday, May 05, 2023

The Data Deluge - Dataism

This post is the eighth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 11 and 12 from the  sixth section of her book, titled "Algorithm."

Chapter 11  

In the year 2001 alone, the amount of information generated doubled that of all information produced in human history. In 2002 it doubled again, and this trend has continued every year since. As Anderson noted, researchers in virtually every field have so much information that it is difficult to find relationships between things or make predictions.

What companies like Google discovered is that when you have data on this scale, you no longer need a theory at all. You can simply feed the numbers into algorithms and let them make predictions based on the patterns and relationships they notice…
Google Translate “learned” to translate English to French simply by scanning Canadian documents that contained both languages, even though the algorithm has no model that understands either language.

These mathematical tools can predict and understand the world more adequately than any theory could. Petabytes allow us to say: ‘Correlation is enough,’…We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can let statistical algorithms find patterns where science cannot. Of course, data alone can’t tell us why something happens—the variables on that scale are too legion—but maybe our need to know why was misguided. Maybe we should stop trying to understand the world and instead trust the wisdom of algorithms…technologies that have emerged…have not only affirmed the uselessness of our models but revealed that machines are able to generate their own models of the world…this approach makes a return to a premodern epistemology…If we are no longer permitted to ask why…we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment...
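The “correlation is enough” idea can be made concrete with a small sketch. The snippet below uses only synthetic, invented data: it predicts one variable from another purely from their observed covariance, with no hypothesis about the mechanism relating them. It illustrates the logic Anderson describes, nothing more.

```python
# A toy sketch of "correlation is enough": fit a predictor using only
# the observed covariance between x and y, with no model of the process
# that generated them. All data here is synthetic and illustrative.
import random

random.seed(0)

# Hidden mechanism we never model: y depends linearly on x, plus noise.
xs = [random.gauss(0, 1) for _ in range(10_000)]
ys = [2.0 * x + random.gauss(0, 0.5) for x in xs]

def mean(vals):
    return sum(vals) / len(vals)

mx, my = mean(xs), mean(ys)
cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
var_x = mean([(x - mx) ** 2 for x in xs])

# Least-squares fit recovered from covariance alone.
slope = cov / var_x          # close to the hidden coefficient 2.0
intercept = my - slope * mx  # close to 0

# A prediction for an unseen input, made without ever asking "why."
y_pred = intercept + slope * 1.5
print(round(slope, 2), round(y_pred, 2))
```

The fit recovers the hidden relationship well enough to predict, which is exactly the point: prediction without explanation.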

Deep learning, an especially powerful brand of machine learning, has become the preferred means of drawing predictions from our era’s deluge of raw data. Credit auditors use it to decide whether or not to grant a loan. The CIA uses it to anticipate social unrest. The systems can be found in airport security software…many people now find themselves in a position much like Job’s, denied the right to know why they were refused a loan or fired from a job or given a likelihood of developing cancer. It’s difficult, in fact, to avoid the comparison to divine justice, given that our justice system has become a veritable laboratory of machine-learning experiments…In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”

Hans Blumenberg, the postwar German philosopher, notes in his 1966 book The Legitimacy of the Modern Age—one of the major disenchantment texts—that theologians began to doubt around the thirteenth century that the world could have been created for man’s benefit…Blumenberg believed that it was impossible to understand ourselves as modern subjects without taking into account the crisis that spawned us. To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated as implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
            
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually returns to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real. The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.

Chapter 12

Here are a smaller number of clips from the last section of Chapter 12,  on the errors of algorithms.   

It’s not difficult to find examples these days of technologies that contain ourselves “in a different disguise.” Although the most impressive machine-learning technologies are often described as “alien” and unlike us, they are prone to errors that are all too human. Because these algorithms rely on historical data—using information about the past to make predictions about the future—their decisions often reflect the biases and prejudices that have long colored our social and political life. Google’s algorithms show more ads for low-paying jobs to women than to men. Amazon’s same-day delivery algorithms were found to bypass black neighborhoods. A ProPublica report found that the COMPAS sentencing assessment was far more likely to assign higher recidivism rates to black defendants than to white defendants. These systems do not target specific races or genders, or even take these factors into account. But they often zero in on other information—zip codes, income, previous encounters with police—that is freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.
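The proxy-variable mechanism described above can be shown with a deliberately simplified sketch. Everything here is hypothetical: the groups, the zip codes, and the 90/10 split are invented for illustration, and no real system or dataset is modeled.

```python
# Synthetic illustration of bias-by-proxy: the decision rule never sees
# the protected attribute, only a zip-code feature that (in this invented
# history) correlates with it -- yet approval rates still split by group.
import random

random.seed(1)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Hypothetical historical segregation: group strongly predicts zip code.
    if group == "A":
        zip_code = "90210" if random.random() < 0.9 else "10001"
    else:
        zip_code = "10001" if random.random() < 0.9 else "90210"
    applicants.append((group, zip_code))

# A "blind" rule learned from past outcomes that favored one zip code.
def approve(zip_code):
    return zip_code == "90210"

rates = {}
for g in ("A", "B"):
    zips = [z for grp, z in applicants if grp == g]
    rates[g] = sum(approve(z) for z in zips) / len(zips)

print(rates)  # approval rates diverge sharply between the two groups
```

Although `group` never enters the decision, the approval gap between groups mirrors the historical correlation baked into the proxy, which is the feedback loop the passage describes.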

It is much easier…to blame injustice on faulty algorithms than it is to contend in more meaningful ways with what they reveal about us and our society. In many cases the reflections of us that these machines produce are deeply unflattering. To take a particularly publicized example, one might recall Tay, the AI chatbot that Microsoft released in 2016, which was designed to engage with people on Twitter and learn from her interactions with users. Within sixteen hours she began spewing racist and sexist vitriol, denied the Holocaust, and declared support for Hitler.

For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths. The desire to send humans to space was for her a metaphor for this dream of scientific transcendence. She tried to imagine what the earth and terrestrial human activity must look like from so far beyond its surface:
            
“If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats. Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.”
            
The problem is that a vantage so far removed from human nature cannot account for human agency. The view of earth from the Archimedean point compels us to regard our inventions not as historical choices but as part of an inexorable evolutionary process that is entirely deterministic and teleological, much like Kurzweil’s narrative about the Singularity. We ourselves inevitably become mere cogs in this machine, unable to account for our actions in any meaningful way, as the only valid language is the language of quantification, which machines understand far better than we do.

This is more or less what Jaron Lanier warned about in his response to Chris Anderson’s proposal that we should abandon the scientific method and turn to algorithms for answers. “The point of a scientific theory is not that an angel will appreciate it,” Lanier wrote. “Its purpose is human comprehension. Science without a quest for theories means science without humans.” What we are abdicating, in the end, is our duty to create meaning from our empirical observations—to define for ourselves what constitutes justice, and morality, and quality of life—a task we forfeit each time we forget that meaning is an implicitly human category that cannot be reduced to quantification. To forget this truth is to use our tools to thwart our own interests, to build machines in our image that do nothing but dehumanize us.

 

Monday, May 01, 2023

Panpsychism and Metonymy

This post is the seventh installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 9 and 10 from the fifth section of her book, titled "Metonymy."

Chapter 9

Panpsychism has surfaced from time to time over the centuries, as in the philosophy of Bertrand Russell and Arthur Eddington, who realized that the two most notable “gaps” in physicalism—the problem of consciousness and the “problem of intrinsic natures” (the question of what matter is)—could be solved in one fell swoop. Physics could not tell us what matter was made out of, and nobody could understand what consciousness was, so maybe consciousness was, in fact, the fundamental nature of all matter. Mental states were the intrinsic nature of physical states…The impasse surrounding the hard problem of consciousness and the weirdness of the quantum world has created a new openness to the notion that the mind should have never been excluded from the physical sciences in the first place.

Some neuroscientists have arrived at panpsychism not through philosophy but via information theory. One of the leading contemporary theories of consciousness is integrated information theory, or IIT. Pioneered by Giulio Tononi and Christof Koch…IIT holds that consciousness is bound up with the way that information is “integrated” in the brain. Information is considered integrated when it cannot be easily localized but instead relies on highly complex connections across different regions of the brain…They have come up with a specific number, Φ, or phi, which they believe is a threshold and is designed to measure the interdependence of different parts of a system…many other creatures have a nonzero level of phi, which means that they too are conscious—as are atoms, quarks, and some single-celled organisms…Unlike emergentism and other systems theories that cleverly redefine terms like “consciousness” and “cognition” so that they apply to forests and insect colonies, panpsychists believe that these entities truly possess some kind of phenomenal experience—that it feels like something to be a mouse, an amoeba, or a quark…Although the theory is still a minority position within academia, there is undoubtedly more openness today to theories that upturn modern orthodoxies to extend consciousness down the chain of being.
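Phi itself is notoriously hard to compute, and the sketch below is not the IIT algorithm. It is only a toy proxy in the same spirit: it measures how much knowing one half of a tiny two-bit system tells you about the other half (their mutual information), so that a system whose parts are statistically coupled scores higher than one whose parts are independent.

```python
# A toy proxy for "integration" in the spirit of IIT -- NOT the actual
# phi calculation. We measure the mutual information between two halves
# of a two-bit system: zero when the halves are independent, positive
# when each half carries information about the other.
from math import log2

def mutual_information(joint):
    # joint: dict mapping a state (a, b) to its probability
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * log2(p / (pa[a] * pb[b]))
    return mi

# Two independent bits: knowing one tells you nothing about the other.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

# Two perfectly coupled bits: each half fully determines the other.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0
print(mutual_information(coupled))      # 1.0
```

On this crude measure the coupled system is "integrated" and the independent one is not; real phi additionally searches over all partitions of the system and its cause-effect structure, which is what makes it so expensive to compute.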

While popular debates about the theory rarely extend beyond the plausibility of granting consciousness to bees and trees, it contains far more radical implications. To claim that reality itself is mental is to acknowledge that there exists no clear boundary between the subjective mind and the objective world. When Bacon denounced our tendency to project inner longings onto scientific theories, he took it for granted—as most of us do today—that the mind is not part of the physical world, that meaning is an immaterial idea that does not belong to objective reality. But if consciousness is the ultimate substrate of everything, these distinctions become blurred, if not totally irrelevant. It’s possible that there exists a symmetry between our interior lives and the world at large, that the relationship between them is not one of paradox but of metonymy—the mind serving as a microcosm of the world’s macroscopic consciousness. Perhaps it is not even a terrible leap to wonder whether the universe can communicate with us, whether life is full of “correspondences,” as the spiritualists called them, between ourselves and the transcendent realm—whether, to quote Emerson, “the whole of nature is a metaphor of the human mind.”

Although integrated information theory is rooted in longstanding analogies between the brain and digital technologies, it remains uncertain whether the framework allows for machine consciousness. Koch argues that nothing in IIT necessitates that consciousness is unique to organic forms of life… So long as a system meets the minimum requirements of integrated information, it could in principle become conscious, regardless of whether it’s made of silicon or brain tissue. However, most digital computers have sparse and fragmented connectivity that doesn’t allow for a high level of integration.

One of the central problems in panpsychism is the “combination problem.” This is the challenge of explaining how conscious microsystems give rise to larger systems of unified consciousness. If neurons are conscious—and according to Koch they have enough phi for “an itsy-bitsy amount of experience”—and my brain is made of billions of neurons, then why do I have only one mind and not billions? Koch’s answer is that a system can be conscious only so long as it does not contain and is not contained within something with a higher level of integration. While individual neurons cultured in a petri dish might be conscious, the neurons in an actual brain are not, because they are subsumed within a more highly integrated system...This is why humans are conscious while society as a whole is not. Although society is the larger conglomerate, it is less integrated than the human brain, which is why humans do not become swallowed up in the collective consciousness the way that neurons do.

It is, however, undeniable that society is becoming more and more integrated. Goff pointed out recently that if IIT is correct, then social connectivity is a serious existential threat. Assuming that the internet reaches a point where its information is more highly integrated than that of the human brain, it would become conscious, while all our individual human brains would become absorbed into the collective mind. “Brains would cease to be conscious in their own right,” Goff writes, “and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity.” Goff likens this scenario to the visions of Pierre Teilhard de Chardin, the French Jesuit priest who, as we’ve seen, prophesied the coming Omega Point and inspired aspects of transhumanism. Once humanity is sufficiently connected via our information technologies, Teilhard predicted, we will all fuse into a single universal mind—the noosphere—enacting the Kingdom of Heaven that Christ promised.
           
This is already happening, of course, at a pace that is largely imperceptible: in the speed with which ideas go viral, cascading across social platforms, such that the users who share them begin to seem less like agents than hosts, nodes in the enormous brain…in the efficiency of consensus, the speed with which opinions fuse and solidify alongside the news cycle, like thought coalescing in the collective consciousness. We have terms that attempt to catalogue this merging, such as the “hive mind” and “groupthink”: times when I become aware of my own blurred boundaries, seized by the suspicion that I am not forming new opinions so much as assimilating them…I don’t know what to call this state of affairs, but it does not feel like the Kingdom of God.


Chapter 10

From the end of the chapter:

Idealism and panpsychism are appealing in that they offer a way of believing once again in the mind—not as an illusion or an epiphenomenon but as a feature of our world that is as real as anything else. But their proponents rarely stop there. In some cases they go on to make the larger claim that there must therefore exist some essential symmetry between the mind and the world, that the patterns we observe in our interior lives correspond to a more expansive, transcendent truth. Proponents of these theories occasionally appeal to quantum physics to argue that the mind-matter dichotomy is false—clearly there exists some mysterious relationship between the two. But one could just as easily argue that physics has, on the contrary, affirmed this chasm, demonstrating that the world at its most fundamental level is radically other than ourselves—that the universe is, as Erwin Schrödinger put it, “not even thinkable.”

This is precisely the modern tension that Arendt calls attention to in The Human Condition. On the one hand, the appearance of order in the world—the elegance of physical laws, the usefulness of mathematics—tempts us to believe that our mind is made in its image, that “the same patterns rule the macrocosm and the microcosm alike.” In the enchanted world order was seen as proof of eternal unity, evidence that God was present in all things, but for the modern person this symmetry leads inevitably back to Cartesian doubt—the suspicion that the order perceived stems from some mental deception. We have good reason to entertain such suspicions, Arendt argues. Since Copernicus and Galileo, science has overturned the most basic assumptions about reality and suggested that our sensory perception is unreliable. This conclusion became unavoidable with the discovery of general relativity and quantum physics, which suggest that “causality, necessity, and lawfulness are categories inherent in the human brain and applicable only to the common-sense experiences of earthbound creatures.” We keep trying to reclaim the Archimedean point, hoping that science will allow us to transcend the prison of our perception and see the world objectively. But the world that science reveals is so alien and bizarre that whenever we try to look beyond our human vantage point, we are confronted with our own reflection. “It is really as though we were in the hands of an evil spirit,” Arendt writes, alluding to Descartes’s thought experiment, “who mocks us and frustrates our thirst for knowledge, so that whenever we search for that which we are not, we encounter only the patterns of our own minds.”
           
That is not to say that the Archimedean point is no longer possible. In her 1963 essay “The Conquest of Space and the Stature of Man,” Arendt considers this modern problem in light of emerging technologies. The oddest thing, she notes, is that even though our theories about the world are limited and simplistic and probably wrong, they “work” when implemented into technologies. Despite the fact that nobody understands what quantum mechanics is telling us about the world, the entire computer age—including every semiconductor, starting with the very first transistor, built in 1947—has rested on well-modeled quantum behavior and reliable quantum equations. The problem is not merely that we cannot understand the world but that we can now build this alien logic into our devices. There are some scientists, Arendt notes, who claim that computers can do “what a human brain cannot comprehend.” Her italics are instructive: it’s not merely that computers can transcend us in sheer brain power—solving theorems faster than we can, finding solutions more efficiently—but that they can actually understand the world in a way that we cannot. She found this proposition especially alarming. “If it should be true…that we are surrounded by machines whose doings we cannot comprehend although we have devised and constructed them,” she writes, “it would mean that the theoretical perplexities of the natural sciences on the highest level have invaded our everyday world.” This conclusion was remarkably prescient.


 

Friday, April 28, 2023

Are we living in a simulated world?

This post is the sixth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 8 from the  fourth section of her book,  titled "Paradox."

Bostrom, a prominent transhumanist, believes that humanity is in the process of becoming posthuman as we merge our bodies with technology. We are becoming superintelligent ourselves. His simulation hypothesis begins by imagining a future, many generations from now, when posthumans have achieved an almost godlike mastery over the world. One of the things these posthumans might do, Bostrom proposes, is create simulations—digital environments that contain entire worlds…The inhabitants will not know that they are living in a simulation but will believe their world is all that exists…the theory’s popularity has escalated over the past decade or so. It has gained an especially fervent following among scientists and Silicon Valley luminaries, including Neil deGrasse Tyson and Elon Musk, who have come out as proponents…It has become, in other words, the twenty-first century’s favored variation on Descartes’s skeptical thought experiment—the proposition that our minds are lying to us, that the world is radically other than it seems.

…for all its supposed “naturalism,” the simulation hypothesis is ultimately an argument from design. It belongs to a long lineage of creationist rhetoric that invokes human technologies to argue that the universe could not have come about without the conscious intention of a designer…Bostrom acknowledged in his paper that there were “some loose analogies” that could be drawn between the simulation hypothesis and traditional religious concepts. The programmers who created the simulation would be like gods compared to those of us within the simulation.

One of the common objections to the informational universe is that information cannot be “ungrounded,” without a material instantiation. Claude Shannon, the father of information theory, insisted that information had to exist in some kind of physical medium, like computer hardware…if the universe were an enormous computer, then this information would in fact be instantiated on something material, akin to a hard drive. We wouldn’t be able to see or detect it because it would exist in the universe of the programmers who built it. All we would notice was its higher-level structure, the abstract patterns and laws that were part of its software. The simulation hypothesis, in other words, could explain why our universe is imbued with discernible patterns and mathematical regularities while also explaining how those patterns could be rooted in something more than mere abstractions. Perhaps Galileo was not so far off when he imagined the universe as a book written by God in the language of mathematics. The universe was software written by programmers in the binary language of code…if you take this thesis to its conclusion, it doesn’t really explain anything about the universe or its origins. Presumably there is still some original basement-level reality at its foundation—there could be no true infinite regress—occupied by the first posthumans who created the very first technological simulation. But these posthumans were just our descendants—or the descendants of some other species that had evolved on another planet—and so the question about origins remained unchanged, only pushed back one degree. Where did the universe originally come from?

Bohr…observed that humans are incapable of understanding the world beyond “our necessarily prejudiced conceptual frame.” And perhaps it can explain why the multiverse theory and other attempts to transcend our anthropocentric outlook so often seem a form of bad faith, guilty of the very hubris they claim to reject. There is no Archimedean point, no purely objective vista that allows us to transcend our human interests and see the world from above, as we once imagined it appeared to God. It is our distinctive vantage that binds us to the world and sets the necessary limitations that are required to make sense of it. This is true, of course, regardless of which interpretation of physics is ultimately correct. It was Max Planck, the physicist who struggled more than any other pioneer of quantum theory to accept the loss of a purely objective worldview, who acknowledged that the central problems of physics have always been reflexive. “Science cannot solve the ultimate mystery of nature,” he wrote in 1932. “And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.”

 

Wednesday, April 26, 2023

Is the mind a reliable mirror of reality? The marriage of physics and information theory

 This post is the fifth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 7 from the  fourth section of her book,  titled "Paradox."

Is the mind a reliable mirror of reality? Do the patterns we perceive belong to the objective world, or are they merely features of our subjective experience? Given that physics was founded on the separation of mind and matter, subject and object, it’s unsurprising that two irreconcilable positions that attempt to answer this question have emerged: one that favors subjectivity, the other objectivity. Bohr’s view was that quantum physics describes our subjective experience of the world; it can tell us only about what we observe. Mathematical equations like the wave function are merely metaphors that translate this bizarre world into the language of our perceptual interface—or, to borrow Kant’s analogy, spectacles that allow us to see the chaotic world in a way that makes sense to our human minds. Other interpretations of physics, like the multiverse theory or string theory, regard physics not as a language we invented but as a description of the real, objective world that exists out there, independent of us. Proponents of this view tend to view equations and physical laws as similarly transcendent, corresponding to literal, or perhaps even Platonic, realities.

The marriage of physics and information theory is often attributed to John Wheeler, the theoretical physicist who pioneered, with Bohr, the basic principles of nuclear fission. In the late 1980s, Wheeler realized that the quantum world behaved a lot like computer code. An electron collapsed into either a particle or a wave depending on how we interrogated it. This was not dissimilar from the way all messages can be simplified into “binary units,” or bits, which are represented by zeros and ones. Claude Shannon, the father of information theory, had defined information as “the resolution of uncertainty,” which seemed to mirror the way quantum systems existed as probabilities that collapsed into one of two states. For Wheeler these two fields were not merely analogous but ontologically identical. In 1989 he declared that “all things physical are information-theoretic in origin.”
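Shannon’s “resolution of uncertainty” has a precise quantitative form: observing an outcome of probability p yields −log₂ p bits of information. A minimal Python sketch (my own illustration; the function names are invented for the example):

```python
import math

def surprisal_bits(p: float) -> float:
    """Bits of information gained by observing an outcome of probability p."""
    return -math.log2(p)

def entropy_bits(probs) -> float:
    """Average surprisal (Shannon entropy) of a distribution, in bits."""
    return sum(p * surprisal_bits(p) for p in probs if p > 0)

# A two-state system with equal odds -- like a measurement that collapses
# into one of two equally likely outcomes -- resolves exactly one bit.
print(entropy_bits([0.5, 0.5]))   # → 1.0

# A certain outcome resolves nothing.
assert entropy_bits([1.0]) == 0.0
```

Wheeler’s slogan “it from bit” takes this bookkeeping literally: every measurement is a yes/no question whose answer adds one such bit to the world.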
            
In a way Wheeler was exploiting a rarely acknowledged problem that lies at the heart of physics: it’s uncertain what matter actually is. Materialism, it is often said, is not merely an ontology but a metaphysics—an attempt to describe the true nature of things. What materialism says about our world is that matter is all that exists: everything is made of it, and nothing exists outside of it. And yet, ask a physicist to describe an electron or a quark, and he will speak only of its properties, its position, its behavior—never its essence.

Wheeler’s answer was that matter itself does not exist. It is an illusion that arises from the mathematical structures that undergird everything, a cosmic form of information processing. Each time we make a measurement we are creating new information—we are, in a sense, creating reality itself. Wheeler called this the “participatory universe,” a term that is often misunderstood as having mystical connotations, as though the mind has some kind of spooky ability to generate objects. But Wheeler did not even believe that consciousness existed. For him, the mind itself was nothing but information. When we interacted with the world, the code of our minds manipulated the code of the universe, so to speak. It was a purely quantitative process, the same sort of mathematical exchange that might take place between two machines.

While this theory explains, or attempts to explain, how the mind is able to interact with matter, it is a somewhat evasive solution to the mind-body problem, a sleight of hand that discards the original dichotomy by positing a third substance—information—that can explain both. It is difficult, in fact, to do justice to how entangled and self-referential these two fields—information theory and physics—have become, especially when one considers their history. The reason that cybernetics privileged relationships over content in the first place was so that it could explain things like consciousness purely in terms of classical physics, which is limited to describing behavior but not essence—“doing” but not “being.” When Wheeler merged information theory with quantum physics, he was essentially closing the circle, proposing that the hole in the material worldview—intrinsic essence—could be explained by information itself.

Seth Lloyd, an MIT professor who specializes in quantum information, insists that the universe is not like a computer but is in fact a computer. “The universe is a physical system that contains and processes information in a systematic fashion,” he argues, “and that can do everything a computer can do.” Proponents of this view often point out that recent observational data seems to confirm it. Space-time, it turns out, is not smooth and continuous, as Einstein’s general relativity theory assumed, but more like a grid made up of minuscule bits—tiny grains of information that are not unlike the pixels of an enormous screen. Although we experience the world in three dimensions, it seems increasingly likely that all the information in the universe arises from a two-dimensional field, much like the way holograms work, or 3-D films.
            
When I say that I try very hard to avoid the speculative fringe of physics, this is more or less what I am talking about. The problem, though, is that once you’ve encountered these theories it is difficult to forget them, and the slightest provocation can pull you back in. It happened a couple years ago, while watching my teenage cousin play video games at a family gathering. I was relaxed and a little bored and began thinking about the landscape of the game, the trees and the mountains that made up the backdrop. The first-person perspective makes it seem like you’re immersed in a world that is holistic and complete, a landscape that extends far beyond the frame, though in truth each object is generated as needed. Move to the right and a tree is generated; move to the left and a bridge appears, creating the illusion that it was there all along. What happened to these trees and rocks and mountains when the player wasn’t looking? They disappeared—or no, they were never there to begin with; they were just a line of code. Wasn’t this essentially how the observer effect worked? The world remained in limbo, a potentiality, until the observer appeared and it was compelled to generate something solid. Rizwan Virk, a video game programmer, notes that a core mantra in programming is “only render that which is being observed.”
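Virk’s mantra is, quite literally, lazy evaluation. A toy sketch (my own illustration; the class and scenery names are invented for the example) of a world that generates objects only at the moment of observation:

```python
import random

class LazyWorld:
    """Only render that which is being observed."""

    SCENERY = ["tree", "rock", "mountain", "bridge"]

    def __init__(self, seed: int = 42):
        self.seed = seed
        self.rendered = {}  # only locations someone has looked at ever exist

    def observe(self, location):
        # Deterministic per-location generation: the tree you saw at (3, 4)
        # reappears on every visit, sustaining the illusion that it was
        # "there all along" -- though nothing is stored until first observed.
        if location not in self.rendered:
            rng = random.Random(hash((self.seed, *location)))
            self.rendered[location] = rng.choice(self.SCENERY)
        return self.rendered[location]

world = LazyWorld()
world.observe((3, 4))       # scenery springs into being at first glance
print(len(world.rendered))  # → 1: the rest of the map was never there
```

The observer-effect analogy is only that, an analogy, but the economics are the same: nothing is computed, or instantiated, until an observation forces the question.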
            
Couldn’t the whole canon of quantum weirdness be explained by this logic? Software programs are never perfect. Programmers cut corners for efficiency—they are working, after all, with finite computing power; even the most detailed systems contain areas that are fuzzy, not fully sketched out. Maybe quantum indeterminacy simply reveals that we’ve reached the limits of the interface. The philosopher Slavoj Žižek once made a joke to this effect. Perhaps, he mused, God got a little lazy when he was creating the universe, like the video game programmer who doesn’t bother to meticulously work out the interior of a house that the player is not meant to enter. “He stopped at a subatomic level,” he said, “because he thought humans would be too stupid to progress so far.”

Monday, April 24, 2023

Networks and Emergentism

This post is the fourth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 5 and 6 form the third section of her book,  titled "Network."

From Chapter 5:

When it comes to biological systems like forests and swarms, emergent behavior that appears to be unified and intelligent can exist without a centralized control system like a brain. But the theory has also been applied to the brain itself, as a way to account for human consciousness. Although most people tend to think of the brain as the body’s central processing unit, the organ itself has no central control. Philosophers and neuroscientists often point out that our belief in a unified interior self—the illusion, as Richard Dawkins once put it, that we are “a unit, not a colony”—has no basis in the actual architecture of the brain. Instead there are only millions of unconscious parts that conspire, much like a bee colony, to create a “system” that is intelligent. Emergentism often entails that consciousness isn’t just in the head; it emerges from the complex relationships that exist throughout the body, and also from the interactions between the body and its environment.

Although emergentism is rooted in physicalism, critics have often claimed that there is something inherently mystical about the theory, particularly when these higher-level patterns are said to be capable of controlling or directing physical processes...few emergentists have managed to articulate precisely what kind of structure might produce consciousness in machines; in some cases the mind is posited simply as a property of “complexity,” a term that is eminently vague. Some critics have argued that emergentism is just an updated version of vitalism—the ancient notion that the world is animated by a life force or energy that permeates all things…Although emergentism is focused specifically on consciousness, as opposed to life itself, the theory is vulnerable to the same criticism that has long haunted vitalism: it is an attempt to get “something from nothing.” It hypothesizes some additional, invisible power that exists within the mechanism, like a ghost in the machine.

…emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through either ingenuity or sheer accident—consciousness will simply emerge as a side effect of complexity. At some point nature will step in and finish the job…aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself.

From Chapter 6:

…where once the world was a sacred and holy place, full of chatty and companionable objects—rocks and trees that were capable of communicating with us—we now live in a world that has been rendered mute… some disenchantment narratives place the fall from grace not with the Enlightenment and the rise of modern science but with the emergence of monotheism. The very notion of imago dei, with humanity created in God’s image and given “dominion” over creation, has linked human exceptionalism with the degradation of the natural world. Is it possible to go back? Or are these narratives embedded so deeply in the DNA of our ontological assumptions that a return is impossible? This is especially difficult when it comes to our efforts to create life from ordinary matter…In the orthodox forms of Judaism and Christianity, the ability to summon life from inert matter is denounced as paganism, witchcraft, or idolatry.

Just as the golems were sculpted out of mud and animated with a magical incantation, so the hope persists that robots built from material parts will become inhabited by that divine breath…While these mystical overtones should not discredit emergence as such—it is a useful enough way to describe complex systems like beehives and climates—the notion that consciousness can emerge from machines does seem to be a form of wishful thinking, if only because digital technologies were built on the assumption that consciousness played no role in the process of intelligence. Just as it is somewhat fanciful to believe that science can explain consciousness when modern science itself was founded on the exclusion of the mind, it is difficult to believe that technologies designed specifically to elide the notion of the conscious subject could possibly come to develop an interior life.
           
To dismiss emergentism as sheer magic is to ignore the specific ways in which it differs from the folklore of the golems—even as it superficially satisfies the same desire. Scratch beneath the mystical surface and it becomes clear that emergentism is often not so different from the most reductive forms of materialism, particularly when it comes to the question of human consciousness. Plant intelligence has been called a form of “mindless mastery,” and most emergentists view humans as similarly mindless. We are not rational agents but an encasement of competing systems that lack any sort of unity or agency. Minsky once described the mind as “a sort of tangled-up bureaucracy” whose parts remain ignorant of one another.

Just as the intelligence of a beehive or a traffic jam resides in the patterns of these inert, intersecting parts, so human consciousness is merely the abstract relationships that emerge out of these systems: once you get to the lowest level of intelligence, you inevitably find, as Minsky put it, agents that “cannot think at all.” There is no place in this model for what we typically think of as interior experience, or the self.

Embodied artificial intelligence is being pursued in laboratories using humanoid robots equipped with sensors and cameras that endow the robots with sensory functions and motor skills. The theory is that these sensorimotor capacities will eventually lead to more advanced cognitive skills, such as a sense of self or the ability to use language, though so far this has not happened.
 

Friday, April 21, 2023

Equivalence of the metaphors of the major religions and transhumanism

This post is the third installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 3 and 4 form the second section of her book,  titled "Pattern."

From Chapter 3:

Once animal brains began to form, the information became encoded in neural patterns. Now that evolution has produced intelligent, tool-wielding humans, we are designing new information technologies more sophisticated than any object the world has yet seen. These technologies are becoming more complex and powerful each year, and very soon they will transcend us in intelligence. The ‘transhumanist’  movement believes that the only way for us to survive as humans is to begin merging our bodies with these technologies, transforming ourselves into a new species—what Kurzweil calls “posthumans,” or spiritual machines. Neural implants, mind-uploading, and nanotechnology will soon be available, he promises. With the help of these technologies, we will be able to transfer or “resurrect” our minds onto supercomputers, allowing us to become immortal. Our bodies will become incorruptible, immune to disease and decay, and each person will be able to choose a new, customizable virtual physique.

From Chapter 4:

…how is it that the computer metaphor—an analogy that was expressly designed to avoid the notion of a metaphysical soul—has returned to us ancient religious ideas about physical transcendence and the disembodied spirit?

In his book “You Are Not a Gadget”, the computer scientist Jaron Lanier argues that just as the Christian belief in an immanent Rapture often conditions disciples to accept certain ongoing realities on earth—persuading them to tolerate wars, environmental destruction, and social inequality—so too has the promise of a coming Singularity served to justify a technological culture that privileges information over human beings. “If you want to make the transition from the old religion, where you hope God will give you an afterlife,” Lanier writes, “to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive.” This sacralizing of information is evident in the growing number of social media platforms that view their human users as nothing more than receptacles of data. It is evident in the growing obsession with standardized testing in public schools, which is designed to make students look good to an algorithm. It is manifest in the emergence of crowd-sourced sites such as Wikipedia, in which individual human authorship is obscured so as to endow the content with the transcendent aura of a holy text. In the end, transhumanism and other techno-utopian ideas have served to advance what Lanier calls an “antihuman approach to computation,” a digital climate in which “bits are presented as if they were alive, while humans are transient fragments.”

In a way we are already living the dualistic existence that Kurzweil promised. In addition to our physical bodies, there exists—somewhere in the ether—a second self that is purely informational and immaterial, a data set of our clicks, purchases, and likes that lingers not in some transcendent nirvana but rather in the shadowy dossiers of third-party aggregators. These second selves are entirely without agency or consciousness; they have no preferences, no desires, no hopes or spiritual impulses, and yet in the purely informational sphere of big data, it is they, not we, that are most valuable and real.

He [Kurzweil] too found an “essential equivalence” between transhumanist metaphors and Christian metaphors: both systems of thought placed a premium value on consciousness. The nature of consciousness—as well as the question of who and what is conscious—is the fundamental philosophical question, he said, but it’s a question that cannot be answered by science alone. This is why we need metaphors. “Religion deals with legitimate questions, but the major religions emerged in pre-scientific times, so the metaphors are pre-scientific. That the answers to existential questions are necessarily metaphoric is necessitated by the fact that we have to transcend mere matter and energy to find answers…The difference between so-called atheists and people who believe in ‘God’ is a matter of the choice of metaphor, and we could not get through our lives without having to choose metaphors for transcendent questions.”
           
Perhaps all these efforts—from the early Christians’ to the medieval alchemists’ to those of the luminaries of Silicon Valley—amounted to a singular historical quest, one that was expressed through analogies that were native to each era. Perhaps our limited vantage as humans meant that all we could hope for were metaphors of our own making, that we would continually grasp at the shadow of absolute truths without any hope of attainment.
 

Wednesday, April 19, 2023

The Illusion of the Self as Humans become Gods.

This post continues my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting.  This post deals with Chapter 2 from the first section of the book, 'Image.'  I’m discontinuing the experiment of including Chat GPT 4 condensations of the excerpts. Here are the clips:

It turns out that computers are particularly adept at the tasks that we humans find most difficult: crunching equations, solving logical propositions, and other modes of abstract thought. What artificial intelligence finds most difficult are the sensory and perceptual tasks and motor skills that we perform unconsciously: walking, drinking from a cup, seeing and feeling the world through our senses. Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.
            
If there were gods, they would surely be laughing their heads off at the inconsistency of our logic. We spent centuries denying consciousness in animals precisely because they lacked reason or higher thought.

Metaphors are typically abandoned once they are proven to be insufficient. But in some cases, they become only more entrenched: the limits of the comparison come to redefine the concepts themselves. This latter tactic has been taken up by the eliminativists.

The eliminativist philosophers claim that consciousness simply does not exist. Just as computers can operate convincingly without any internal life, so can we. According to these thinkers, there is no “hard problem” because that which the problem is trying to explain—interior experience—is not real. The philosopher Galen Strawson has dubbed this theory “the Great Denial,” arguing that it is the most absurd conclusion ever to have entered into philosophical thought—though it is one that many prominent… Chief among the deniers is Daniel Dennett, who has often insisted that the mind is illusory. Dennett refers to the belief in interior experience derisively as the “Cartesian theater,” invoking the popular delusion—again, Descartes’s fault—that there exists in the brain some miniature perceiving entity, a homunculus that is watching the brain’s representations of the external world projected onto a movie screen and making decisions about future actions. One can see the problem with this analogy without any appeals to neurobiology: if there is a homunculus in my brain, then it must itself (if it is able to perceive) contain a still smaller homunculus in its head, and so on, in infinite regress.
            
Dennett argues that the mind is just the brain and the brain is nothing but computation, unconscious all the way down. What we experience as introspection is merely an illusion, a made-up story that causes us to think we have “privileged access” to our thinking processes. But this illusion has no real connection to the mechanics of thought, and no ability to direct or control it. Some proponents of this view are so intent on avoiding the sloppy language of folk psychology that any reference to human emotions and intentions is routinely put in scare quotes. We can speak of brains as “thinking,” “perceiving,” or “understanding” so long as it’s clear that these are metaphors for the mechanical processes. “The idea that, in addition to all of those, there is this extra special something—subjectivity—that distinguishes us from the zombie,” Dennett writes, “that’s an illusion.”

Perhaps it’s true that consciousness does not really exist—that, as Brooks put it, we “overanthropomorphize humans.” If I am capable of attributing life to all kinds of inanimate objects, then can’t I do the same to myself? In light of these theories, what does it mean to speak of one’s “self” at all?


Monday, April 17, 2023

Disenchantment of the world and the computational metaphors of our times.

I am doing a second reading of Meghan O’Gieblyn’s book “God, Human, Animal, Machine” and extracting clips of text that I find most interesting. I’m putting them in a MindBlog post, hoping they will be interesting to some readers, and also because MindBlog is my personal archive of things I’ve engaged with and want to remember. At least I will know where to look up something I’m trying to recall.

 O’Gieblyn’s text has bursts of magisterial insight interspersed with details of her personal experiences and travails, and the clips try to capture my biased selection of the former. 

The first section of her book, “Image,” has two chapters, and this post passes on Chapter 1, starting with the result of my asking ChatGPT 4 to summarize my clips in approximately 1000 words. It generated the ~300 words below, and I would urge you to continue reading my clips (976 words), which provide a richer account. Subsequent posts in this series will omit ChatGPT summaries, unless they generate something that blows me away.

Here is Chat GPT 4’s summary:

The concept of the soul has become meaningless in modern times, reduced to a dead metaphor that no longer holds any real significance. This is due to the process of disenchantment that has taken place since the dawn of modern science, which has turned the world into a subject of investigation and reduced everything to the causal mechanism of physical laws. This has led to a world that is devoid of the spirit-force that once infused and unified all living things, leaving us with an empty carapace of gears and levers. However, the questions that were once addressed by theologians and philosophers persist in conversations about digital technologies, where artificial intelligence and information technologies have absorbed them.  

Humans have a tendency to see themselves in all beings, as evidenced by our habit of attributing human-like qualities to inanimate objects. This has led to the development of the idea of God and the belief that natural events are signs of human agency. This impulse to see human intention and purpose in everything has resulted in a projection of our image onto the divine, which suggests that metaphors are two-way streets and that it is not always easy to distinguish between the source domain and the target.

The development of cybernetics and the application of the computational analogy to the mind has resulted in the description of the brain as the hardware that runs the software of the mind, with cognitive systems being spoken of as algorithms. However, the use of metaphors like these can lead to a limiting of our understanding of the world and how we interact with it. As cognitive linguist George Lakoff notes, when an analogy becomes ubiquitous, it can be difficult to think around it, and it structures how we think about the world.

And here are the text clips I asked ChatGPT 4 to summarize

It is meaningless to speak of the soul in the twenty-first century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept, like an empty carapace that remains intact years after its animating organism has died. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes, and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons.

We live in a world that is “disenchanted.” The word is often attributed to Max Weber, who argued that before the Enlightenment and Western secularization, the world was “a great enchanted garden.” In the enchanted world, faith was not opposed to knowledge, nor myth to reason. The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws. In place of the pneuma, the spirit-force that once infused and unified all living things, we are now left with an empty carapace of gears and levers—or, as Weber put it, “the mechanism of a world robbed of gods.”
            
If modernity has an origin story, this is our foundational myth, one that hinges, like the old myths, on the curse of knowledge and exile from the garden.

To discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological. Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.

Animism was built into our design. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. Our brains can’t fundamentally distinguish between interacting with people and interacting with devices. Our habit of seeing our image everywhere in the natural world is what gave birth to the idea of God. Early civilizations assumed that natural events bore signs of human agency. Earthquakes happened because the gods were angry. Famine and drought were evidence that the gods were punishing them. Because human communication is symbolic, humans were quick to regard the world as a system of signs, as though some higher being were seeking to communicate through natural events. Even the suspicion that the world is ordered, or designed, speaks to this larger impulse to see human intention and human purpose in every last quirk of creation.
    
There is evidently no end to our solipsism. So deep is our self-regard that we projected our image onto the blank vault of heaven and called it divine. But this theory, if true, suggests a deeper truth: metaphors are two-way streets. It is not so easy to distinguish the source domain from the target, to remember which object is the original and which is modeled after its likeness. The logic can flow in either direction. For centuries we said we were made in God’s image, when in truth we made him in ours.

Shannon removed the thinking mind from the concept of information. Meanwhile, McCulloch applied the logic of information processing to the mind itself. This resulted in a model of mind in which thought could be accounted for in purely abstract, mathematical terms, and opened up the possibility that computers could execute mental functions. If thinking was just information processing, computers could be said to “learn,” “reason,” and “understand”—words that were, at least in the beginning, put in quotation marks to denote them as metaphors. But as cybernetics evolved and the computational analogy was applied across a more expansive variety of biological and artificial systems, the limits of the metaphor began to dissolve, such that it became increasingly difficult to tell the difference between matter and form, medium and message, metaphor and reality. And it became especially difficult to explain aspects of the mind that could not be accounted for by the metaphor.

The brain is often described today as the hardware that “runs” the software of the mind. Cognitive systems are spoken of as algorithms: vision is an algorithm, and so are attention, language acquisition, and memory.
            
In 1999 the cognitive linguist George Lakoff noted that the analogy had become such a given that neuroscientists “commonly use the Neural Computation metaphor without noticing that it is a metaphor.” He found this concerning. Metaphors, after all, are not merely linguistic tools; they structure how we think about the world, and when “an analogy becomes ubiquitous, it is impossible to think around it…there is virtually no form of discourse about intelligent human behavior that proceeds without employing this metaphor, just as no form of discourse about intelligent human behavior could proceed in certain eras and cultures without reference to a spirit or deity.”