Friday, May 17, 2024
Evolutionarily conserved neural responses to affective touch transcend consciousness and change with age
Interesting work from Charbonneau et al. in macaque monkeys on the affective (gentle, pleasant) touch pathways, which in humans use a different neural network than the pathways of discriminative touch.
Wednesday, May 15, 2024
Collective behavior from surprise minimization
A fascinating model for collective behavior from Heins et al.:
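The title idea, collective behavior arising from agents that act to reduce the "surprise" of their observations, can be given a rough feel in code. The sketch below is my own toy caricature, not the model from the paper: each agent holds a Gaussian preference over its distance to the crowd and descends the gradient of its surprise, and cohesion falls out of that descent alone.

    import numpy as np

    # Toy caricature of collective behavior from surprise minimization.
    # Each agent "prefers" to observe itself at distance `preferred` from the
    # center of the crowd (a Gaussian prior over observations) and moves down
    # the gradient of its surprise, -log p(observation). Illustration only;
    # not the model of Heins et al.
    rng = np.random.default_rng(0)
    n_agents, steps, step_size, preferred = 30, 200, 0.1, 1.0
    pos = rng.uniform(-5, 5, size=(n_agents, 2))

    for _ in range(steps):
        offset = pos - pos.mean(axis=0)          # vector from crowd center
        dist = np.linalg.norm(offset, axis=1, keepdims=True)
        # Gaussian surprise is (dist - preferred)^2 / 2 up to a constant, so
        # its gradient pulls each agent toward the preferred radius.
        grad = (dist - preferred) * offset / np.maximum(dist, 1e-9)
        pos -= step_size * grad + rng.normal(0, 0.02, size=pos.shape)

    spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()
    print(f"mean distance from center: {spread:.2f}")  # settles near `preferred`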
Wednesday, September 27, 2023
Memory for stimulus sequences unique to humans?
Identifying cognitive capacities underlying the human evolutionary transition is challenging, and many hypotheses exist for what makes humans capable of, for example, producing and understanding language, preparing meals, and having culture on a grand scale. Instead of describing processes whereby information is processed, recent studies have suggested that there are key differences between humans and other animals in how information is recognized and remembered. Such constraints may act as a bottleneck for subsequent information processing and behavior, proving important for understanding differences between humans and other animals. We briefly discuss different sequential aspects of cognition and behavior and the importance of distinguishing between simultaneous and sequential input, and conclude that explicit tests on non-human great apes have been lacking. Here, we test the memory-for-stimulus-sequences hypothesis by carrying out three tests on bonobos and one test on humans. Our results show that bonobos’ general working memory decays rapidly and that they fail to learn the difference between the order of two stimuli even after more than 2,000 trials, corroborating earlier findings in other animals. However, as expected, humans solve the same sequence discrimination almost immediately. The explicit test on whether bonobos represent stimulus sequences as an unstructured collection of memory traces was not informative as no differences were found between responses to the different probe tests. However, overall, this first empirical study of sequence discrimination on non-human great apes supports the idea that non-human animals, including the closest relatives to humans, lack a memory for stimulus sequences. This may be an ability that sets humans apart from other animals and could be one reason behind the origin of human culture.
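The "unstructured collection of memory traces" idea is easy to make concrete. The few lines below are my own cartoon, not the paper's task: if memory records only which stimuli occurred, the sequences A-then-B and B-then-A collapse into one and the same memory state, so no amount of training could separate them, while any order-preserving memory distinguishes them at once.

    # Cartoon of the memory-for-stimulus-sequences hypothesis (my illustration,
    # not the experimental design). An order-free "bag" memory cannot separate
    # A->B from B->A; an order-preserving memory separates them trivially.

    def bag_memory(sequence):
        """Unstructured memory: only which stimuli occurred, order lost."""
        return frozenset(sequence)

    def ordered_memory(sequence):
        """Sequence memory: stimuli plus their order."""
        return tuple(sequence)

    ab, ba = ["A", "B"], ["B", "A"]
    print(bag_memory(ab) == bag_memory(ba))          # True: indistinguishable
    print(ordered_memory(ab) == ordered_memory(ba))  # False: discriminable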
Friday, May 12, 2023
Virality
This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13 from the seventh section of her book, titled "Virality."
The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.
The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.
When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
The general public has become preoccupied by robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.
The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to distinguish between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.
Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that swept the Go championship in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”
If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” a phrase that is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.
If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.
Friday, May 05, 2023
The Data Deluge - Dataism
This post is the eighth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 11 and 12 from the sixth section of her book, titled "Algorithm."
Chapter 11
In the year 2001 alone, the amount of information generated doubled that of all information produced in human history. In 2002 it doubled again, and this trend has continued every year since. As Anderson noted, researchers in virtually every field have so much information that it is difficult to find relationships between things or make predictions.
What companies like Google discovered is that when you have data on this scale, you no longer need a theory at all. You can simply feed the numbers into algorithms and let them make predictions based on the patterns and relationships they notice…
Google Translate “learned” to translate English to French simply by scanning Canadian documents that contained both languages, even though the algorithm has no model that understands either language.
These mathematical tools can predict and understand the world more adequately than any theory could. Petabytes allow us to say: ‘Correlation is enough,’…We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can let statistical algorithms find patterns where science cannot. Of course, data alone can’t tell us why something happens—the variables on that scale are too legion—but maybe our need to know why was misguided. Maybe we should stop trying to understand the world and instead trust the wisdom of algorithms…technologies that have emerged…have not only affirmed the uselessness of our models but revealed that machines are able to generate their own models of the world…this approach marks a return to a premodern epistemology…If we are no longer permitted to ask why…we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment...
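The Google Translate anecdote can be made concrete in a few lines. The sketch below is a generic illustration of the statistical trick, not Google's system: it aligns English and French words purely from co-occurrence counts over a made-up parallel corpus, with no grammar or dictionary anywhere in the code.

    from collections import Counter, defaultdict

    # Word alignment from nothing but co-occurrence statistics over a toy
    # parallel corpus. No model of either language is present; the pairings
    # fall out of the counts alone. Illustration, not Google's pipeline.
    parallel = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("the cat eats",   "le chat mange"),
        ("the dog eats",   "le chien mange"),
    ]

    co = defaultdict(Counter)   # co[e][f]: how often e and f appear together
    fr_totals = Counter()       # overall frequency of each French word
    for en, fr in parallel:
        fr_totals.update(fr.split())
        for e in en.split():
            co[e].update(fr.split())

    for e in ["cat", "dog", "sleeps", "eats"]:
        # Score by co-occurrence rate so common filler words like "le" lose.
        best = max(co[e], key=lambda f: co[e][f] / fr_totals[f])
        print(e, "->", best)    # cat -> chat, dog -> chien, sleeps -> dort, ...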
Deep learning, an especially powerful brand of machine learning, has become the preferred means of drawing predictions from our era’s deluge of raw data. Credit auditors use it to decide whether or not to grant a loan. The CIA uses it to anticipate social unrest. The systems can be found in airport security software…many people now find themselves in a position much like Job’s, denied the right to know why they were refused a loan or fired from a job or given a likelihood of developing cancer. It’s difficult, in fact, to avoid the comparison to divine justice, given that our justice system has become a veritable laboratory of machine-learning experiments…In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”
Hans Blumenberg, the postwar German philosopher, notes in his 1966 book The Legitimacy of the Modern Age—one of the major disenchantment texts—that theologians began to doubt around the thirteenth century that the world could have been created for man’s benefit…Blumenberg believed that it was impossible to understand ourselves as modern subjects without taking into account the crisis that spawned us. To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated as implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually returns to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real. The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.
Chapter 12
Here are a smaller number of clips from the last section of Chapter 12, on the errors of algorithms.
It’s not difficult to find examples these days of technologies that contain ourselves “in a different disguise.” Although the most impressive machine-learning technologies are often described as “alien” and unlike us, they are prone to errors that are all too human. Because these algorithms rely on historical data—using information about the past to make predictions about the future—their decisions often reflect the biases and prejudices that have long colored our social and political life. Google’s algorithms show more ads for low-paying jobs to women than to men. Amazon’s same-day delivery algorithms were found to bypass black neighborhoods. A ProPublica report found that the COMPAS sentencing assessment was far more likely to assign higher recidivism rates to black defendants than to white defendants. These systems do not target specific races or genders, or even take these factors into account. But they often zero in on other information—zip codes, income, previous encounters with police—that is freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.
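The proxy mechanism is simple enough to reproduce on synthetic data. In the sketch below, with made-up numbers chosen only for illustration, a linear scorer never sees the protected attribute, yet a correlated stand-in for zip code lets the historical bias through anyway.

    import numpy as np

    # Bias by proxy on synthetic data: historical approvals were biased against
    # group 1, and the model is trained WITHOUT the group label. A feature that
    # merely correlates with group (a stand-in for zip code) carries the bias.
    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, n)                    # protected attribute (hidden)
    zipcode = (group + (rng.random(n) < 0.2)) % 2    # proxy, ~80% aligned with group
    income = rng.normal(50.0, 10.0, n)

    # Biased historical labels: a group-0 bonus unrelated to income.
    approved = (income + 15 * (1 - group) + rng.normal(0, 5, n) > 60).astype(float)

    X = np.column_stack([np.ones(n), income, zipcode])   # no `group` column
    w, *_ = np.linalg.lstsq(X, approved, rcond=None)
    score = X @ w

    print("mean score, group 0:", round(score[group == 0].mean(), 3))
    print("mean score, group 1:", round(score[group == 1].mean(), 3))
    # Group 1 scores markedly lower although the model never saw `group`:
    # the proxy feature reconstructs the history for it.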
It is much easier…to blame injustice on faulty algorithms than it is to contend in more meaningful ways with what they reveal about us and our society. In many cases the reflections of us that these machines produce are deeply unflattering. To take a particularly publicized example, one might recall Tay, the AI chatbot that Microsoft released in 2016, which was designed to engage with people on Twitter and learn from her interactions with users. Within sixteen hours she began spewing racist and sexist vitriol, denied the Holocaust, and declared support for Hitler.
For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths. The desire to send humans to space was for her a metaphor for this dream of scientific transcendence. She tried to imagine what the earth and terrestrial human activity must look like from so far beyond its surface:
“If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats. Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.”
The problem is that a vantage so far removed from human nature cannot account for human agency. The view of earth from the Archimedean point compels us to regard our inventions not as historical choices but as part of an inexorable evolutionary process that is entirely deterministic and teleological, much like Kurzweil’s narrative about the Singularity. We ourselves inevitably become mere cogs in this machine, unable to account for our actions in any meaningful way, as the only valid language is the language of quantification, which machines understand far better than we do.
This is more or less what Jaron Lanier warned about in his response to Chris Anderson’s proposal that we should abandon the scientific method and turn to algorithms for answers. “The point of a scientific theory is not that an angel will appreciate it,” Lanier wrote. “Its purpose is human comprehension. Science without a quest for theories means science without humans.” What we are abdicating, in the end, is our duty to create meaning from our empirical observations—to define for ourselves what constitutes justice, and morality, and quality of life—a task we forfeit each time we forget that meaning is an implicitly human category that cannot be reduced to quantification. To forget this truth is to use our tools to thwart our own interests, to build machines in our image that do nothing but dehumanize us.
Monday, May 01, 2023
Panpsychism and Metonymy
This post is the seventh installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 9 and 10 from the fifth section of her book, titled "Metonymy."
Chapter 9
Panpsychism has surfaced from time to time over the centuries, as in the philosophy of Bertrand Russell and Arthur Eddington, who realized that the two most notable “gaps” in physicalism—the problem of consciousness and the “problem of intrinsic natures” (the question of what matter is)—could be solved in one fell swoop. Physics could not tell us what matter was made out of, and nobody could understand what consciousness was, so maybe consciousness was, in fact, the fundamental nature of all matter. Mental states were the intrinsic nature of physical states…The impasse surrounding the hard problem of consciousness and the weirdness of the quantum world has created a new openness to the notion that the mind should have never been excluded from the physical sciences in the first place.
Some neuroscientists have arrived at panpsychism not through philosophy but via information theory. One of the leading contemporary theories of consciousness is integrated information theory, or IIT. Pioneered by Giulio Tononi and Christof Koch…IIT holds that consciousness is bound up with the way that information is “integrated” in the brain. Information is considered integrated when it cannot be easily localized but instead relies on highly complex connections across different regions of the brain…They have come up with a specific number, Φ, or phi, which they believe is a threshold and is designed to measure the interdependence of different parts of a system…many other creatures have a nonzero level of phi, which means that they too are conscious—as are atoms, quarks, and some single-celled organisms…Unlike emergentism and other systems theories that cleverly redefine terms like “consciousness” and “cognition” so that they apply to forests and insect colonies, panpsychists believe that these entities truly possess some kind of phenomenal experience—that it feels like something to be a mouse, an amoeba, or a quark…Although the theory is still a minority position within academia, there is undoubtedly more openness today to theories that upturn modern orthodoxies to extend consciousness down the chain of being.
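For a hands-on feel of "integration," here is a deliberately crude stand-in, my illustration rather than Tononi and Koch's actual phi, whose full definition involves searching over system partitions: the total correlation of a two-unit system, i.e. how much information the joint state carries beyond its parts taken independently.

    import numpy as np

    # Crude stand-in for "integration" (NOT the real phi of IIT): total
    # correlation of a two-unit binary system, the information in the joint
    # state beyond what the parts carry independently.
    def total_correlation(p_xy):
        """Sum of marginal entropies minus joint entropy, in bits."""
        h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return h(p_xy.sum(axis=1)) + h(p_xy.sum(axis=0)) - h(p_xy.ravel())

    independent = np.outer([0.5, 0.5], [0.5, 0.5])  # units ignore each other
    coupled = np.array([[0.5, 0.0],
                        [0.0, 0.5]])                # units always agree

    print(total_correlation(independent))  # 0.0 bits: no integration
    print(total_correlation(coupled))      # 1.0 bit: maximally integrated pair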
While popular debates about the theory rarely extend beyond the plausibility of granting consciousness to bees and trees, it contains far more radical implications. To claim that reality itself is mental is to acknowledge that there exists no clear boundary between the subjective mind and the objective world. When Bacon denounced our tendency to project inner longings onto scientific theories, he took it for granted—as most of us do today—that the mind is not part of the physical world, that meaning is an immaterial idea that does not belong to objective reality. But if consciousness is the ultimate substrate of everything, these distinctions become blurred, if not totally irrelevant. It’s possible that there exists a symmetry between our interior lives and the world at large, that the relationship between them is not one of paradox but of metonymy—the mind serving as a microcosm of the world’s macroscopic consciousness. Perhaps it is not even a terrible leap to wonder whether the universe can communicate with us, whether life is full of “correspondences,” as the spiritualists called them, between ourselves and the transcendent realm—whether, to quote Emerson, “the whole of nature is a metaphor of the human mind.”
Although integrated information theory is rooted in longstanding analogies between the brain and digital technologies, it remains uncertain whether the framework allows for machine consciousness. Koch argues that nothing in IIT necessitates that consciousness is unique to organic forms of life… So long as a system meets the minimum requirements of integrated information, it could in principle become conscious, regardless of whether it’s made of silicon or brain tissue. However, most digital computers have sparse and fragmented connectivity that doesn’t allow for a high level of integration.
One of the central problems in panpsychism is the “combination problem.” This is the challenge of explaining how conscious microsystems give rise to larger systems of unified consciousness. If neurons are conscious—and according to Koch they have enough phi for “an itsy-bitsy amount of experience”—and my brain is made of billions of neurons, then why do I have only one mind and not billions? Koch’s answer is that a system can be conscious only so long as it does not contain and is not contained within something with a higher level of integration. While individual neurons cultured in a petri dish might be conscious, the neurons in an actual brain are not, because they are subsumed within a more highly integrated system...This is why humans are conscious while society as a whole is not. Although society is the larger conglomerate, it is less integrated than the human brain, which is why humans do not become swallowed up in the collective consciousness the way that neurons do.
It is, however, undeniable that society is becoming more and more integrated. Goff pointed out recently that if IIT is correct, then social connectivity is a serious existential threat. Assuming that the internet reaches a point where its information is more highly integrated than that of the human brain, it would become conscious, while all our individual human brains would become absorbed into the collective mind. “Brains would cease to be conscious in their own right,” Goff writes, “and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity.” Goff likens this scenario to the visions of Pierre Teilhard de Chardin, the French Jesuit priest who, as we’ve seen, prophesied the coming Omega Point and inspired aspects of transhumanism. Once humanity is sufficiently connected via our information technologies, Teilhard predicted, we will all fuse into a single universal mind—the noosphere—enacting the Kingdom of Heaven that Christ promised.
This is already happening, of course, at a pace that is largely imperceptible - in the speed with which ideas go viral, cascading across social platforms, such that the users who share them begin to seem less like agents than hosts, nodes in the enormous brain…in the efficiency of consensus, the speed with which opinions fuse and solidify alongside the news cycle, like thought coalescing in the collective consciousness. We have terms that attempt to catalogue this merging—the “hive mind,” “groupthink” - times when I become aware of my own blurred boundaries, seized by the suspicion that I am not forming new opinions so much as assimilating them…I don’t know what to call this state of affairs, but it does not feel like the Kingdom of God.
Chapter 10
From the end of the chapter:
Idealism and panpsychism are appealing in that they offer a way of believing once again in the mind—not as an illusion or an epiphenomenon but as a feature of our world that is as real as anything else. But their proponents rarely stop there. In some cases they go on to make the larger claim that there must therefore exist some essential symmetry between the mind and the world, that the patterns we observe in our interior lives correspond to a more expansive, transcendent truth. Proponents of these theories occasionally appeal to quantum physics to argue that the mind-matter dichotomy is false—clearly there exists some mysterious relationship between the two. But one could just as easily argue that physics has, on the contrary, affirmed this chasm, demonstrating that the world at its most fundamental level is radically other than ourselves—that the universe is, as Erwin Schrödinger put it, “not even thinkable.”
This is precisely the modern tension that Arendt calls attention to in The Human Condition. On the one hand, the appearance of order in the world—the elegance of physical laws, the usefulness of mathematics—tempts us to believe that our mind is made in its image, that “the same patterns rule the macrocosm and the microcosm alike.” In the enchanted world order was seen as proof of eternal unity, evidence that God was present in all things, but for the modern person this symmetry leads inevitably back to Cartesian doubt—the suspicion that the order perceived stems from some mental deception. We have good reason to entertain such suspicions, Arendt argues. Since Copernicus and Galileo, science has overturned the most basic assumptions about reality and suggested that our sensory perception is unreliable. This conclusion became unavoidable with the discovery of general relativity and quantum physics, which suggest that “causality, necessity, and lawfulness are categories inherent in the human brain and applicable only to the common-sense experiences of earthbound creatures.” We keep trying to reclaim the Archimedean point, hoping that science will allow us to transcend the prison of our perception and see the world objectively. But the world that science reveals is so alien and bizarre that whenever we try to look beyond our human vantage point, we are confronted with our own reflection. “It is really as though we were in the hands of an evil spirit,” Arendt writes, alluding to Descartes’s thought experiment, “who mocks us and frustrates our thirst for knowledge, so that whenever we search for that which we are not, we encounter only the patterns of our own minds.”
That is not to say that the Archimedean point is no longer possible. In her 1963 essay “The Conquest of Space and the Stature of Man,” Arendt considers this modern problem in light of emerging technologies. The oddest thing, she notes, is that even though our theories about the world are limited and simplistic and probably wrong, they “work” when implemented into technologies. Despite the fact that nobody understands what quantum mechanics is telling us about the world, the entire computer age—including every semiconductor, starting with the very first transistor, built in 1947—has rested on well-modeled quantum behavior and reliable quantum equations. The problem is not merely that we cannot understand the world but that we can now build this alien logic into our devices. There are some scientists, Arendt notes, who claim that computers can do “what a human brain cannot comprehend.” Her italics are instructive: it’s not merely that computers can transcend us in sheer brain power—solving theorems faster than we can, finding solutions more efficiently—but that they can actually understand the world in a way that we cannot. She found this proposition especially alarming. “If it should be true…that we are surrounded by machines whose doings we cannot comprehend although we have devised and constructed them,” she writes, “it would mean that the theoretical perplexities of the natural sciences on the highest level have invaded our everyday world.” This conclusion was remarkably prescient.
Friday, April 28, 2023
Are we living in a simulated world?
This post is the sixth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 8 from the fourth section of her book, titled "Paradox."
Bostrom, a prominent transhumanist, believes that humanity is in the process of becoming posthuman as we merge our bodies with technology. We are becoming superintelligent ourselves. His simulation hypothesis begins by imagining a future, many generations from now, when posthumans have achieved an almost godlike mastery over the world. One of the things these posthumans might do, Bostrom proposes, is create simulations—digital environments that contain entire worlds…The inhabitants will not know that they are living in a simulation but will believe their world is all that exists…the theory’s popularity has escalated over the past decade or so. It has gained an especially fervent following among scientists and Silicon Valley luminaries, including Neil deGrasse Tyson and Elon Musk, who have come out as proponents…It has become, in other words, the twenty-first century’s favored variation on Descartes’s skeptical thought experiment—the proposition that our minds are lying to us, that the world is radically other than it seems.
…for all its supposed “naturalism,” the simulation hypothesis is ultimately an argument from design. It belongs to a long lineage of creationist rhetoric that invokes human technologies to argue that the universe could not have come about without the conscious intention of a designer…Bostrom acknowledged in his paper that there were “some loose analogies” that could be drawn between the simulation hypothesis and traditional religious concepts. The programmers who created the simulation would be like gods compared to those of us within the simulation.
One of the common objections to the informational universe is that information cannot be “ungrounded,” without a material instantiation. Claude Shannon, the father of information theory, insisted that information had to exist in some kind of physical medium, like computer hardware…if the universe were an enormous computer, then this information would in fact be instantiated on something material, akin to a hard drive. We wouldn’t be able to see or detect it because it would exist in the universe of the programmers who built it. All we would notice would be its higher-level structure, the abstract patterns and laws that were part of its software. The simulation hypothesis, in other words, could explain why our universe is imbued with discernible patterns and mathematical regularities while also explaining how those patterns could be rooted in something more than mere abstractions. Perhaps Galileo was not so far off when he imagined the universe as a book written by God in the language of mathematics. The universe was software written by programmers in the binary language of code…if you take this thesis to its conclusion, it doesn’t really explain anything about the universe or its origins. Presumably there is still some original basement-level reality at its foundation—there could be no true infinite regress—occupied by the first posthumans who created the very first technological simulation. But these posthumans were just our descendants—or the descendants of some other species that had evolved on another planet—and so the question about origins remained unchanged, only pushed back one degree. Where did the universe originally come from?
Bohr…observed that humans are incapable of understanding the world beyond “our necessarily prejudiced conceptual frame.” And perhaps it can explain why the multiverse theory and other attempts to transcend our anthropocentric outlook seem a form of bad faith, guilty of the very hubris they claim to reject. There is no Archimedean point, no purely objective vista that allows us to transcend our human interests and see the world from above, as we once imagined it appeared to God. It is our distinctive vantage that binds us to the world and sets the necessary limitations that are required to make sense of it. This is true, of course, regardless of which interpretation of physics is ultimately correct. It was Max Planck, the physicist who struggled more than any other pioneer of quantum theory to accept the loss of a purely objective worldview, who acknowledged that the central problems of physics have always been reflexive. “Science cannot solve the ultimate mystery of nature,” he wrote in 1932. “And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.”
Wednesday, April 26, 2023
Is the mind a reliable mirror of reality? The marriage of physics and information theory
This post is the fifth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 7 from the fourth section of her book, titled "Paradox."
Is the mind a reliable mirror of reality? Do the patterns we perceive belong to the objective world, or are they merely features of our subjective experience? Given that physics was founded on the separation of mind and matter, subject and object, it’s unsurprising that two irreconcilable positions that attempt to answer this question have emerged: one that favors subjectivity, the other objectivity. Bohr’s view was that quantum physics describes our subjective experience of the world; it can tell us only about what we observe. Mathematical equations like the wave function are merely metaphors that translate this bizarre world into the language of our perceptual interface—or, to borrow Kant’s analogy, spectacles that allow us to see the chaotic world in a way that makes sense to our human minds. Other interpretations of physics, like the multiverse theory or string theory, regard physics not as a language we invented but as a description of the real, objective world that exists out there, independent of us. Proponents of this view tend to view equations and physical laws as similarly transcendent, corresponding to literal, or perhaps even Platonic, realities.
The marriage of physics and information theory is often attributed to John Wheeler, the theoretical physicist who pioneered, with Bohr, the basic principles of nuclear fission. In the late 1980s, Wheeler realized that the quantum world behaved a lot like computer code. An electron collapsed into either a particle or a wave depending on how we interrogated it. This was not dissimilar from the way all messages can be simplified into “binary units,” or bits, which are represented by zeros and ones. Claude Shannon, the father of information theory, had defined information as “the resolution of uncertainty,” which seemed to mirror the way quantum systems existed as probabilities that collapsed into one of two states. For Wheeler these two fields were not merely analogous but ontologically identical. In 1989 he declared that “all things physical are information-theoretic in origin.”
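Shannon's "resolution of uncertainty" has a one-line arithmetic core: an outcome of probability p resolves -log2(p) bits, which is why a 50/50 collapse, like Wheeler's binary quantum "questions," is worth exactly one bit. A minimal illustration:

    import math

    def bits(p):
        """Information, in bits, gained by learning an outcome of probability p."""
        return -math.log2(p)

    print(bits(0.5))       # 1.0 bit: a fair coin, or a 50/50 two-state collapse
    print(bits(0.25))      # 2.0 bits: one outcome among four equally likely
    print(bits(1 / 1024))  # ~10.0 bits: rarer outcomes resolve more uncertainty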
In a way Wheeler was exploiting a rarely acknowledged problem that lies at the heart of physics: it’s uncertain what matter actually is. Materialism, it is often said, is not merely an ontology but a metaphysics—an attempt to describe the true nature of things. What materialism says about our world is that matter is all that exists: everything is made of it, and nothing exists outside of it. And yet, ask a physicist to describe an electron or a quark, and he will speak only of its properties, its position, its behavior—never its essence.
Wheeler’s answer was that matter itself does not exist. It is an illusion that arises from the mathematical structures that undergird everything, a cosmic form of information processing. Each time we make a measurement we are creating new information—we are, in a sense, creating reality itself. Wheeler called this the “participatory universe,” a term that is often misunderstood as having mystical connotations, as though the mind has some kind of spooky ability to generate objects. But Wheeler did not even believe that consciousness existed. For him, the mind itself was nothing but information. When we interacted with the world, the code of our minds manipulated the code of the universe, so to speak. It was a purely quantitative process, the same sort of mathematical exchange that might take place between two machines.
While this theory explains, or attempts to explain, how the mind is able to interact with matter, it is a somewhat evasive solution to the mind-body problem, a sleight of hand that discards the original dichotomy by positing a third substance—information—that can explain both. It is difficult, in fact, to do justice to how entangled and self-referential these two fields—information theory and physics—have become, especially when one considers their history. The reason that cybernetics privileged relationships over content in the first place was so that it could explain things like consciousness purely in terms of classical physics, which is limited to describing behavior but not essence—“doing” but not “being.” When Wheeler merged information theory with quantum physics, he was essentially closing the circle, proposing that the hole in the material worldview—intrinsic essence—could be explained by information itself.
Seth Lloyd, an MIT professor who specializes in quantum information, insists that the universe is not like a computer but is in fact a computer. “The universe is a physical system that contains and processes information in a systematic fashion,” he argues, “and that can do everything a computer can do.” Proponents of this view often point out that recent observational data seems to confirm it. Space-time, it turns out, is not smooth and continuous, as Einstein’s general relativity theory assumed, but more like a grid made up of minuscule bits—tiny grains of information that are not unlike the pixels of an enormous screen. Although we experience the world in three dimensions, it seems increasingly likely that all the information in the universe arises from a two-dimensional field, much like the way holograms work, or 3-D films.
When I say that I try very hard to avoid the speculative fringe of physics, this is more or less what I am talking about. The problem, though, is that once you’ve encountered these theories it is difficult to forget them, and the slightest provocation can pull you back in. It happened a couple years ago, while watching my teenage cousin play video games at a family gathering. I was relaxed and a little bored and began thinking about the landscape of the game, the trees and the mountains that made up the backdrop. The first-person perspective makes it seem like you’re immersed in a world that is holistic and complete, a landscape that extends far beyond the frame, though in truth each object is generated as needed. Move to the right and a tree is generated; move to the left and a bridge appears, creating the illusion that it was there all along. What happened to these trees and rocks and mountains when the player wasn’t looking? They disappeared—or no, they were never there to begin with; they were just a line of code. Wasn’t this essentially how the observer effect worked? The world remained in limbo, a potentiality, until the observer appeared and it was compelled to generate something solid. Rizwan Virk, a video game programmer, notes that a core mantra in programming is “only render that which is being observed.”
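Virk's mantra translates almost directly into code. The sketch below is a generic lazy-world pattern of my own, not any particular engine: a tile exists only when someone looks at it, yet a fixed seed makes every revisit agree with the last, preserving the illusion that it was there all along.

    import hashlib

    SEED = b"mindblog-world"   # hypothetical seed; any constant works

    def terrain(x, y):
        """'Render' the tile at (x, y) only when it is observed."""
        digest = hashlib.sha256(SEED + f"{x},{y}".encode()).digest()
        return ["grass", "tree", "rock", "water"][digest[0] % 4]

    print(terrain(10, -3))  # generated at the moment of observation
    print(terrain(10, -3))  # the same tile again: consistency without storage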
Couldn’t the whole canon of quantum weirdness be explained by this logic? Software programs are never perfect. Programmers cut corners for efficiency—they are working, after all, with finite computing power; even the most detailed systems contain areas that are fuzzy, not fully sketched out. Maybe quantum indeterminacy simply reveals that we’ve reached the limits of the interface. The philosopher Slavoj Žižek once made a joke to this effect. Perhaps, he mused, God got a little lazy when he was creating the universe, like the video game programmer who doesn’t bother to meticulously work out the interior of a house that the player is not meant to enter. “He stopped at a subatomic level,” he said, “because he thought humans would be too stupid to progress so far.”
Monday, April 24, 2023
Networks and Emergentism
This post is the fourth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 5 and 6 form the third section of her book, titled "Network."
From Chapter 5:
When it comes to biological systems like forests and swarms, emergent behavior that appears to be unified and intelligent can exist without a centralized control system like a brain. But the theory has also been applied to the brain itself, as a way to account for human consciousness. Although most people tend to think of the brain as the body’s central processing unit, the organ itself has no central control. Philosophers and neuroscientists often point out that our belief in a unified interior self—the illusion, as Richard Dawkins once put it, that we are “a unit, not a colony”—has no basis in the actual architecture of the brain. Instead there are only millions of unconscious parts that conspire, much like a bee colony, to create a “system” that is intelligent. Emergentism often entails that consciousness isn’t just in the head; it emerges from the complex relationships that exist throughout the body, and also from the interactions between the body and its environment.
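The flavor of order emerging from local rules with no central controller is easy to reproduce. In the toy cellular automaton below, a standard majority-vote rule offered as my own example rather than anything from the book, each cell consults only its two immediate neighbors, yet large ordered domains form across the whole ring.

    import random

    # Majority-vote cellular automaton on a ring: each cell copies the majority
    # of its three-cell neighborhood. No cell sees the whole, yet contiguous
    # ordered domains emerge from an initially random state.
    random.seed(4)
    size = 60
    cells = [random.randint(0, 1) for _ in range(size)]
    print("".join(map(str, cells)))  # random start

    for _ in range(30):
        cells = [
            1 if cells[i - 1] + cells[i] + cells[(i + 1) % size] >= 2 else 0
            for i in range(size)
        ]

    print("".join(map(str, cells)))  # blocks of agreement: local rules, global order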
Although emergentism is rooted in physicalism, critics have often claimed that there is something inherently mystical about the theory, particularly when these higher-level patterns are said to be capable of controlling or directing physical processes...few emergentists have managed to articulate precisely what kind of structure might produce consciousness in machines; in some cases the mind is posited simply as a property of “complexity,” a term that is eminently vague. Some critics have argued that emergentism is just an updated version of vitalism—the ancient notion that the world is animated by a life force or energy that permeates all things…Although emergentism is focused specifically on consciousness, as opposed to life itself, the theory is vulnerable to the same criticism that has long haunted vitalism: it is an attempt to get “something from nothing.” It hypothesizes some additional, invisible power that exists within the mechanism, like a ghost in the machine.
…emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through either ingenuity or sheer accident—consciousness will simply emerge as a side effect of complexity. At some point nature will step in and finish the job…aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself.
From Chapter 6:
…where once the world was a sacred and holy place, full of chatty and companionable objects—rocks and trees that were capable of communicating with us—we now live in a world that has been rendered mute… some disenchantment narratives place the fall from grace not with the Enlightenment and the rise of modern science but with the emergence of monotheism. The very notion of imago dei, with humanity created in God’s image and given “dominion” over creation, has linked human exceptionalism with the degradation of the natural world. Is it possible to go back? Or are these narratives embedded so deeply in the DNA of our ontological assumptions that a return is impossible? This is especially difficult when it comes to our efforts to create life from ordinary matter…In the orthodox forms of Judaism and Christianity, the ability to summon life from inert matter is denounced as paganism, witchcraft, or idolatry.
Just as the golems were sculpted out of mud and animated with a magical incantation, so the hope persists that robots built from material parts will become inhabited by that divine breath…While these mystical overtones should not discredit emergence as such—it is a useful enough way to describe complex systems like beehives and climates—the notion that consciousness can emerge from machines does seem to be a form of wishful thinking, if only because digital technologies were built on the assumption that consciousness played no role in the process of intelligence. Just as it is somewhat fanciful to believe that science can explain consciousness when modern science itself was founded on the exclusion of the mind, it is difficult to believe that technologies designed specifically to elide the notion of the conscious subject could possibly come to develop an interior life.
To dismiss emergentism as sheer magic is to ignore the specific ways in which it differs from the folklore of the golems—even as it superficially satisfies the same desire. Scratch beneath the mystical surface and it becomes clear that emergentism is often not so different from the most reductive forms of materialism, particularly when it comes to the question of human consciousness. Plant intelligence has been called a form of “mindless mastery,” and most emergentists view humans as similarly mindless. We are not rational agents but an encasement of competing systems that lack any sort of unity or agency. Minsky once described the mind as “a sort of tangled-up bureaucracy” whose parts remain ignorant of one another.
Just as the intelligence of a beehive or a traffic jam resides in the patterns of these inert, intersecting parts, so human consciousness is merely the abstract relationships that emerge out of these systems: once you get to the lowest level of intelligence, you inevitably find, as Minsky put it, agents that “cannot think at all.” There is no place in this model for what we typically think of as interior experience, or the self.
Embodied artificial intelligence is being pursued in laboratories using humanoid robots equipped with sensors and cameras that endow the robots with sensory functions and motor skills. The theory is that these sensorimotor capacities will eventually lead to more advanced cognitive skills, such as a sense of self or the ability to use language, though so far this has not happened.
Friday, April 21, 2023
Equivalence of the metaphors of the major religions and transhumanism
This post is the third installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 3 and 4 form the second section of her book, titled "Pattern."
From Chapter 3:
Once animal brains began to form, the information became encoded in neural patterns. Now that evolution has produced intelligent, tool-wielding humans, we are designing new information technologies more sophisticated than any object the world has yet seen. These technologies are becoming more complex and powerful each year, and very soon they will transcend us in intelligence. The ‘transhumanist’ movement believes that the only way for us to survive as humans is to begin merging our bodies with these technologies, transforming ourselves into a new species—what Kurzweil calls “posthumans,” or spiritual machines. Neural implants, mind-uploading, and nanotechnology will soon be available, he promises. With the help of these technologies, we will be able to transfer or “resurrect” our minds onto supercomputers, allowing us to become immortal. Our bodies will become incorruptible, immune to disease and decay, and each person will be able to choose a new, customizable virtual physique.
From Chapter 4:
…how is it that the computer metaphor—an analogy that was expressly designed to avoid the notion of a metaphysical soul—has returned to us ancient religious ideas about physical transcendence and the disembodied spirit?
In his book “You Are Not a Gadget”, the computer scientist Jaron Lanier argues that just as the Christian belief in an imminent Rapture often conditions disciples to accept certain ongoing realities on earth—persuading them to tolerate wars, environmental destruction, and social inequality—so too has the promise of a coming Singularity served to justify a technological culture that privileges information over human beings. “If you want to make the transition from the old religion, where you hope God will give you an afterlife,” Lanier writes, “to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive.” This sacralizing of information is evident in the growing number of social media platforms that view their human users as nothing more than receptacles of data. It is evident in the growing obsession with standardized testing in public schools, which is designed to make students look good to an algorithm. It is manifest in the emergence of crowd-sourced sites such as Wikipedia, in which individual human authorship is obscured so as to endow the content with the transcendent aura of a holy text. In the end, transhumanism and other techno-utopian ideas have served to advance what Lanier calls an “antihuman approach to computation,” a digital climate in which “bits are presented as if they were alive, while humans are transient fragments.”
In a way we are already living the dualistic existence that Kurzweil promised. In addition to our physical bodies, there exists—somewhere in the ether—a second self that is purely informational and immaterial, a data set of our clicks, purchases, and likes that lingers not in some transcendent nirvana but rather in the shadowy dossiers of third-party aggregators. These second selves are entirely without agency or consciousness; they have no preferences, no desires, no hopes or spiritual impulses, and yet in the purely informational sphere of big data, it is they, not we, that are most valuable and real.
He too found an “essential equivalence” between transhumanist metaphors and Christian metaphors: both systems of thought placed a premium value on consciousness. The nature of consciousness—as well as the question of who and what is conscious—is the fundamental philosophical question, he said, but it’s a question that cannot be answered by science alone. This is why we need metaphors. “[R]eligion deals with legitimate questions but the major religions emerged in pre-scientific times so that the metaphors are pre-scientific. That the answers to existential questions are necessarily metaphoric is necessitated by the fact that we have to transcend mere matter and energy to find answers…The difference between so-called atheists and people who believe in ‘God’ is a matter of the choice of metaphor, and we could not get through our lives without having to choose metaphors for transcendent questions.”
Perhaps all these efforts—from the early Christians’ to the medieval alchemists’ to those of the luminaries of Silicon Valley—amounted to a singular historical quest, one that was expressed through analogies that were native to each era. Perhaps our limited vantage as humans meant that all we could hope for were metaphors of our own making, that we would continually grasp at the shadow of absolute truths without any hope of attainment.
Wednesday, April 19, 2023
The Illusion of the Self as Humans become Gods.
This post continues my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. This post deals with Chapter 2 from the first section of the book, 'Image.' I’m discontinuing the experiment of including ChatGPT 4 condensations of the excerpts. Here are the clips:
It turns out that computers are particularly adept at the tasks that we humans find most difficult: crunching equations, solving logical propositions, and other modes of abstract thought. What artificial intelligence finds most difficult are the sensory perceptive tasks and motor skills that we perform unconsciously: walking, drinking from a cup, seeing and feeling the world through our senses. Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.
If there were gods, they would surely be laughing their heads off at the inconsistency of our logic. We spent centuries denying consciousness in animals precisely because they lacked reason or higher thought.
Metaphors are typically abandoned once they are proven to be insufficient. But in some cases, they become only more entrenched: the limits of the comparison come to redefine the concepts themselves. This latter tactic has been taken up by the eliminativists, philosophers who claim that consciousness simply does not exist. Just as computers can operate convincingly without any internal life, so can we. According to these thinkers, there is no “hard problem” because that which the problem is trying to explain—interior experience—is not real. The philosopher Galen Strawson has dubbed this theory “the Great Denial,” arguing that it is the most absurd conclusion ever to have entered into philosophical thought—though it is one that many prominent philosophers endorse. Chief among the deniers is Daniel Dennett, who has often insisted that the mind is illusory. Dennett refers to the belief in interior experience derisively as the “Cartesian theater,” invoking the popular delusion—again, Descartes’s fault—that there exists in the brain some miniature perceiving entity, a homunculus that is watching the brain’s representations of the external world projected onto a movie screen and making decisions about future actions. One can see the problem with this analogy without any appeals to neurobiology: if there is a homunculus in my brain, then it must itself (if it is able to perceive) contain a still smaller homunculus in its head, and so on, in infinite regress.
Dennett argues that the mind is just the brain and the brain is nothing but computation, unconscious all the way down. What we experience as introspection is merely an illusion, a made-up story that causes us to think we have “privileged access” to our thinking processes. But this illusion has no real connection to the mechanics of thought, and no ability to direct or control it. Some proponents of this view are so intent on avoiding the sloppy language of folk psychology that any reference to human emotions and intentions is routinely put in scare quotes. We can speak of brains as “thinking,” “perceiving,” or “understanding” so long as it’s clear that these are metaphors for the mechanical processes. “The idea that, in addition to all of those, there is this extra special something—subjectivity—that distinguishes us from the zombie,” Dennett writes, “that’s an illusion.”
Perhaps it’s true that consciousness does not really exist—that, as Brooks put it, we “overanthropomorphize humans.” If I am capable of attributing life to all kinds of inanimate objects, then can’t I do the same to myself? In light of these theories, what does it mean to speak of one’s “self” at all?
Monday, April 17, 2023
Disenchantment of the world and the computational metaphors of our times.
I am doing a second reading of Meghan O’Gieblyn’s book “God, Human, Animal, Machine” and extracting clips of text that I find most interesting. I’m putting them in a MindBlog post, hoping they will be interesting to some readers, and also because MindBlog is my personal archive of things I want to remember I've engaged with. At least I will know where to look up something I'm trying to recall.
O’Gieblyn’s text has bursts of magisterial insight interspersed with details of her personal experiences and travails, and the clips try to capture my biased selection of the former.
The first section of her book, “Image,” has two chapters, and this post passes on Chapter 1, starting with the result of my asking ChatGPT 4 to summarize my clips in approximately 1000 words. It generated the ~300 words below, and I would urge you to continue reading my clips (976 words), which provide a richer account. Subsequent posts in this series will omit ChatGPT summaries, unless they generate something that blows me away.
Here is ChatGPT 4’s summary:
The concept of the soul has become meaningless in modern times, reduced to a dead metaphor that no longer holds any real significance. This is due to the process of disenchantment that has taken place since the dawn of modern science, which has turned the world into a subject of investigation and reduced everything to the causal mechanism of physical laws. This has led to a world that is devoid of the spirit-force that once infused and unified all living things, leaving us with an empty carapace of gears and levers. However, the questions that were once addressed by theologians and philosophers persist in conversations about digital technologies, where artificial intelligence and information technologies have absorbed them.
Humans have a tendency to see themselves in all beings, as evidenced by our habit of attributing human-like qualities to inanimate objects. This has led to the development of the idea of God and the belief that natural events are signs of human agency. This impulse to see human intention and purpose in everything has resulted in a projection of our image onto the divine, which suggests that metaphors are two-way streets and that it is not always easy to distinguish between the source domain and the target.
The development of cybernetics and the application of the computational analogy to the mind has resulted in the description of the brain as the hardware that runs the software of the mind, with cognitive systems being spoken of as algorithms. However, the use of metaphors like these can lead to a limiting of our understanding of the world and how we interact with it. As cognitive linguist George Lakoff notes, when an analogy becomes ubiquitous, it can be difficult to think around it, and it structures how we think about the world.
And here are the text clips I asked ChatGPT 4 to summarize:
It is meaningless to speak of the soul in the twenty-first century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept, like an empty carapace that remains intact years after its animating organism has died. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes, and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons.
We live in a world that is “disenchanted.” The word is often attributed to Max Weber, who argued that before the Enlightenment and Western secularization, the world was “a great enchanted garden.” In the enchanted world, faith was not opposed to knowledge, nor myth to reason. The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws. In place of the pneuma, the spirit-force that once infused and unified all living things, we are now left with an empty carapace of gears and levers—or, as Weber put it, “the mechanism of a world robbed of gods.”
If modernity has an origin story, this is our foundational myth, one that hinges, like the old myths, on the curse of knowledge and exile from the garden.
To discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological. Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.
Animism was built into our design. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. Our brains can’t fundamentally distinguish between interacting with people and interacting with devices. Our habit of seeing our image everywhere in the natural world is what gave birth to the idea of God. Early civilizations assumed that natural events bore signs of human agency. Earthquakes happened because the gods were angry. Famine and drought were evidence that the gods were punishing them. Because human communication is symbolic, humans were quick to regard the world as a system of signs, as though some higher being were seeking to communicate through natural events. Even the suspicion that the world is ordered, or designed, speaks to this larger impulse to see human intention and human purpose in every last quirk of “creation.”
There is evidently no end to our solipsism. So deep is our self-regard that we projected our image onto the blank vault of heaven and called it divine. But this theory, if true, suggests a deeper truth: metaphors are two-way streets. It is not so easy to distinguish the source domain from the target, to remember which object is the original and which is modeled after its likeness. The logic can flow in either direction. For centuries we said we were made in God’s image, when in truth we made him in ours.
Shannon removed the thinking mind from the concept of information. Meanwhile, McCulloch applied the logic of information processing to the mind itself. This resulted in a model of mind in which thought could be accounted for in purely abstract, mathematical terms, and opened up the possibility that computers could execute mental functions. If thinking was just information processing, computers could be said to “learn,” “reason,” and “understand”—words that were, at least in the beginning, put in quotation marks to denote them as metaphors. But as cybernetics evolved and the computational analogy was applied across a more expansive variety of biological and artificial systems, the limits of the metaphor began to dissolve, such that it became increasingly difficult to tell the difference between matter and form, medium and message, metaphor and reality. And it became especially difficult to explain aspects of the mind that could not be accounted for by the metaphor.
The brain is often described today as the hardware that “runs” the software of the mind. Cognitive systems are spoken of as algorithms: vision is an algorithm, and so are attention, language acquisition, and memory.
In 1999 the cognitive linguist George Lakoff noted that the analogy had become such a given that neuroscientists “commonly use the Neural Computation metaphor without noticing that it is a metaphor.” He found this concerning. Metaphors, after all, are not merely linguistic tools; they structure how we think about the world, and when “an analogy becomes ubiquitous, it is impossible to think around it...there is virtually no form of discourse about intelligent human behavior that proceeds without employing this metaphor, just as no form of discourse about intelligent human behavior could proceed in certain eras and cultures without reference to a spirit or deity.”
Wednesday, April 05, 2023
The fundamentals of empathy
Emotional contagion is the most ancestral form of empathy. We tested to what extent the proximate mechanisms of emotional contagion are evolutionarily conserved by assessing the role of oxytocin, known to regulate empathic behaviors in mammals, in social fear contagion in zebrafish. Using oxytocin and oxytocin receptor mutants, we show that oxytocin is both necessary and sufficient for observer zebrafish to imitate the distressed behavior of conspecific demonstrators. The brain regions associated with emotional contagion in zebrafish are homologous to those involved in the same process in rodents (e.g., striatum, lateral septum), receiving direct projections from oxytocinergic neurons located in the pre-optic area. Together, our results support an evolutionary conserved role for oxytocin as a key regulator of basic empathic behaviors across vertebrates.
Friday, March 03, 2023
Empathy lost and regained in a mouse model of dementia
A PNAS Journal Club article by Carolyn Beans points to work by Yao and colleagues showing that the loss of empathy that is especially problematic for those experiencing frontotemporal dementia (FTD, a rare condition that often develops earlier in life than other types of dementia) can be linked to slowed activity in a particular brain region of a mouse model of FTD. When Yao and colleagues experimentally increased that brain activity, empathy returned. Here is the technical abstract from Yao and collaborators:
Highlights
• Mice display dmPFC-dependent emotional contagion and other-directed consolation
• Emotional contagion and other-directed consolation are blunted in aged c9FTD mice
• Aged c9FTD mice exhibit reduced pyramidal neuron excitability in the dmPFC
• Enhancing dmPFC activity rescues empathy loss in aged c9FTD mice
Summary
Empathic function is essential for the well-being of social species. Empathy loss is associated with various brain disorders and represents arguably the most distressing feature of frontotemporal dementia (FTD), a leading form of presenile dementia. The neural mechanisms are unknown. We established an FTD mouse model deficient in empathy and observed that aged somatic transgenic mice expressing GGGGCC repeat expansions in C9orf72, a common genetic cause of FTD, exhibited blunted affect sharing and failed to console distressed conspecifics by affiliative contact. Distress-induced consoling behavior activated the dorsomedial prefrontal cortex (dmPFC), which developed profound pyramidal neuron hypoexcitability in aged mutant mice. Optogenetic dmPFC inhibition attenuated affect sharing and other-directed consolation in wild-type mice, whereas chemogenetically enhancing dmPFC excitability rescued empathy deficits in mutant mice, even at advanced ages when substantial cortical atrophy had occurred. These results establish cortical hypoexcitability as a pathophysiological basis of empathy loss in FTD and suggest a therapeutic strategy.
Wednesday, February 22, 2023
A fish passes the mirror self recognition test!
Some animals have the remarkable capacity for mirror self-recognition (MSR), yet any implications for self-awareness remain uncertain and controversial. This is largely because explicit tests of the two potential mechanisms underlying MSR are still lacking: mental image of the self and kinesthetic visual matching. Here, we test the hypothesis that MSR ability in cleaner fish, Labroides dimidiatus, is associated with a mental image of the self, in particular the self-face, like in humans. Mirror-naive fish initially attacked photograph models of both themselves and unfamiliar strangers. In contrast, after all fish had passed the mirror mark test, fish did not attack their own (motionless) images, but still frequently attacked those of unfamiliar individuals. When fish were exposed to composite photographs, the self-face/unfamiliar body were not attacked, but photographs of unfamiliar face/self-body were attacked, demonstrating that cleaner fish with MSR capacity recognize their own facial characteristics in photographs. Additionally, when presented with self-photographs with a mark placed on the throat, unmarked mirror-experienced cleaner fish demonstrated throat-scraping behaviors. When combined, our results provide clear evidence that cleaner fish recognize themselves in photographs and that the likely mechanism for MSR is associated with a mental image of the self-face, not a kinesthetic visual-matching model. Humans are also capable of having a mental image of the self-face, which is considered an example of private self-awareness. We demonstrate that combining mirror test experiments with photographs has enormous potential to further our understanding of the evolution of cognitive processes and private self-awareness across nonhuman animals.
Monday, December 05, 2022
A collection of animal stories
I've accumulated several items on animal behaviors in my list of potential post subjects, and have decided to simply pass them on to allow animal curious MindBlog readers to have a look.
Farhad Manjoo notes that ants, not humans, own the world.
Humans are not the only animals that toss objects at irritating neighbors: octopuses that gather at feeding sites with abundant food, and competition for it, sometimes throw silt at each other.
Crows understand the concept of recursion as well as children do, and better than macaque monkeys.
Wild Chimpanzees display declarative referential gesturing.
Monday, October 03, 2022
Triggers for mother love
A fascinating open access article from Margaret Livingstone carrying forward the famous experiments by Harry Harlow:
Significance
Harry Harlow found that infant monkeys form strong and lasting attachments to inanimate surrogates, but only if the surrogate is soft; here I report that postpartum monkey mothers can also form strong and lasting attachments to soft inanimate objects. Thus, mother/infant and infant/mother bonds may both be triggered by soft touch.
Abstract
Previous studies showed that baby monkeys separated from their mothers develop strong and lasting attachments to inanimate surrogate mothers, but only if the surrogate has a soft texture; soft texture is more important for the infant’s attachment than is the provision of milk. Here I report that postpartum female monkeys also form strong and persistent attachments to inanimate surrogate infants, that the template for triggering maternal attachment is also tactile, and that even a brief period of attachment formation can dominate visual and auditory cues indicating a more appropriate target.
Friday, September 23, 2022
Machine learning is translating the languages of animals
Machine-learning systems, which use algorithms to detect patterns in large collections of data, have excelled at analyzing human language, giving rise to voice assistants that recognize speech, transcription software that converts speech to text and digital tools that translate between human languages.
...this technology can be deployed to decode animal communication, working towards finding a Google Translate for animals, using machine-learning algorithms to identify when squeaking mice are stressed or why fruit bats are shouting. Even more ambitious projects are underway — to create a comprehensive catalog of crow calls, map the syntax of sperm whales and even to build technologies that allow humans to talk back.
...machine-learning algorithms can spot subtle patterns that might elude human listeners...these programs can tell apart the voices of individual animals, distinguish between sounds that animals make in different circumstances and break their vocalizations down into smaller parts, a crucial step in deciphering meaning.
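To make that pattern-detection step concrete, here is a minimal sketch in Python of the kind of pipeline these projects describe, assuming the librosa and scikit-learn libraries. The file names and context labels are hypothetical placeholders of my own, not data from any of the projects mentioned above.

```python
# A minimal sketch (not from any of the projects above) of the pipeline
# described: summarize each recording as acoustic features, then train a
# classifier to distinguish calls recorded in different circumstances.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def call_features(path, sr=22050):
    """Summarize one recording as a fixed-length vector of MFCC statistics."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape (20, n_frames)
    # Mean and std over time give a crude but serviceable call "fingerprint".
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical corpus: mouse squeaks labeled by recording context.
recordings = [("squeak_stressed_01.wav", "stressed"),
              ("squeak_stressed_02.wav", "stressed"),
              ("squeak_baseline_01.wav", "baseline"),
              ("squeak_baseline_02.wav", "baseline")]

X = np.array([call_features(path) for path, _ in recordings])
y = np.array([label for _, label in recordings])

# Cross-validated accuracy above chance suggests the calls carry a context
# signal the algorithm can exploit, as described in the article.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=2).mean())
```

The same feature vectors can feed an unsupervised clustering step instead, which is closer to how researchers break vocalizations into smaller recurring units when no labels are available.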
...the technology could also be deployed for the benefit of animals, helping experts monitor the welfare of both wild and domestic fauna. Scientists also said that they hoped that by providing new insight into animal lives, this research might prompt a broader societal shift. Many pointed to the galvanizing effect of the 1970 album “Songs of the Humpback Whale,” which featured recordings of otherworldly whale calls and has been widely credited with helping to spark the global Save the Whales movement...many scientists said they hoped these new, high-tech efforts to understand the vocalizations of whales — and crows and bats and even naked mole rats — will be similarly transformative, providing new ways to connect with and understand the creatures with whom we share the planet.
Monday, September 19, 2022
The sex of human experimenters influences mouse behaviors and neural responses.
We show that the sex of human experimenters affects mouse behaviors and responses following administration of the rapid-acting antidepressant ketamine and its bioactive metabolite (2R,6R)-hydroxynorketamine. Mice showed aversion to the scent of male experimenters, preference for the scent of female experimenters and increased stress susceptibility when handled by male experimenters. This human-male-scent-induced aversion and stress susceptibility was mediated by the activation of corticotropin-releasing factor (CRF) neurons in the entorhinal cortex that project to hippocampal area CA1. Exposure to the scent of male experimenters before ketamine administration activated CA1-projecting entorhinal cortex CRF neurons, and activation of this CRF pathway modulated in vivo and in vitro antidepressant-like effects of ketamine. A better understanding of the specific and quantitative contributions of the sex of human experimenters to study outcomes in rodents may improve replicability between studies and, as we have shown, reveal biological and pharmacological mechanisms.
Monday, September 05, 2022
Animals (including us) conjure model-based structures from random events
Superstitious learning is usually thought to be accounted for by conditioned association, but Jin et al. now show that monkeys can develop more complex cognitive structures independent of reinforcement:
Significance
Past studies on learning and decision-making usually rely on the assumption that the task is learnable. However, humans and other animals often infer spurious relationships from coincidental associations, and it is unknown if this could be achieved without reward conditioning. Here, we exposed monkeys to sets of images that had a hidden hierarchical order and to unordered sets that lacked an underlying structure. Monkeys treated the unordered sets as if they had a hierarchical order even under reward schedules that incentivized random choices. The results cannot be explained by simple associative mechanisms that account for other types of spurious learning, suggesting that when presented with random events animals conjure elaborate model-based structures.
Abstract
Humans and other animals often infer spurious associations among unrelated events. However, such superstitious learning is usually accounted for by conditioned associations, raising the question of whether an animal could develop more complex cognitive structures independent of reinforcement. Here, we tasked monkeys with discovering the serial order of two pictorial sets: a “learnable” set, in which the stimuli were implicitly ordered and monkeys were rewarded for choosing the higher-rank stimulus, and an “unlearnable” set, in which stimuli were unordered and feedback was random regardless of the choice. We replicated prior results that monkeys reliably learned the implicit order of the learnable set. Surprisingly, the monkeys behaved as though some ordering also existed in the unlearnable set, showing consistent choice preference that transferred to novel untrained pairs in this set, even under a preference-discouraging reward schedule that gave rewards more frequently to the stimulus that was selected less often. In simulations, a model-free reinforcement learning algorithm (Q-learning) displayed a degree of consistent ordering among the unlearnable set but, unlike the monkeys, failed to do so under the preference-discouraging reward schedule. Our results suggest that monkeys infer abstract structures from objectively random events using heuristics that extend beyond stimulus–outcome conditional learning to more cognitive model-based learning mechanisms.
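For readers who want the model-free baseline made concrete, below is a minimal Q-learning sketch of the unlearnable-set condition, written in Python from the abstract's description rather than from the authors' code; the number of stimuli, learning rate, softmax temperature, and reward probabilities are illustrative assumptions of mine.

```python
# A minimal sketch (assumptions mine, not the authors' simulation) of the
# model-free baseline: a Q-learning agent choosing between pairs drawn from
# an "unlearnable" set, under a preference-discouraging schedule that
# rewards the less-chosen stimulus of each pair more often.
import numpy as np

rng = np.random.default_rng(0)
n_stim, alpha, beta, n_trials = 7, 0.2, 3.0, 5000

Q = np.zeros(n_stim)       # one learned value per stimulus
counts = np.ones(n_stim)   # how often each stimulus has been chosen

for _ in range(n_trials):
    a, b = rng.choice(n_stim, size=2, replace=False)
    # Softmax choice between the two offered stimuli.
    p_a = 1.0 / (1.0 + np.exp(-beta * (Q[a] - Q[b])))
    choice = a if rng.random() < p_a else b
    counts[choice] += 1
    # Preference-discouraging schedule: the less-chosen stimulus of the
    # pair is the more likely one to be rewarded.
    less_chosen = a if counts[a] < counts[b] else b
    p_reward = 0.7 if choice == less_chosen else 0.3
    r = float(rng.random() < p_reward)
    Q[choice] += alpha * (r - Q[choice])  # model-free value update

# Because feedback tracks choice frequency rather than any hidden rank,
# the Q-values stay roughly flat and no stable ordering should emerge --
# unlike the monkeys, who maintained a consistent preference anyway.
print(np.round(Q, 2))
```

The point of the sketch is the contrast the abstract draws: an agent whose values come only from stimulus–outcome updates is pushed toward indifference by this schedule, so the monkeys' persistent ordering suggests something beyond model-free conditioning.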