Friday, June 02, 2023

Gender inequality is associated with differences between the brains of men and women

Sobering but not surprising analysis by Zugman et al.:

Significance

Gender inequality is associated with worse mental health and academic achievement in women. Using a dataset of 7,876 MRI scans from healthy adults living in 29 different countries, we here show that gender inequality is associated with differences between the brains of men and women: cortical thickness of the right hemisphere, especially in limbic regions such as the right caudal anterior cingulate and right medial orbitofrontal, as well as the left lateral occipital, present thinner cortices in women compared to men only in gender-unequal countries. These results suggest a potential neural mechanism underlying the worse outcome of women in gender-unequal settings, as well as highlight the role of the environment in the brain differences between women and men.
Abstract
Gender inequality across the world has been associated with a higher risk to mental health problems and lower academic achievement in women compared to men. We also know that the brain is shaped by nurturing and adverse socio-environmental experiences. Therefore, unequal exposure to harsher conditions for women compared to men in gender-unequal countries might be reflected in differences in their brain structure, and this could be the neural mechanism partly explaining women’s worse outcomes in gender-unequal countries. We examined this through a random-effects meta-analysis on cortical thickness and surface area differences between adult healthy men and women, including a meta-regression in which country-level gender inequality acted as an explanatory variable for the observed differences. A total of 139 samples from 29 different countries, totaling 7,876 MRI scans, were included. Thickness of the right hemisphere, and particularly the right caudal anterior cingulate, right medial orbitofrontal, and left lateral occipital cortex, presented no differences or even thicker regional cortices in women compared to men in gender-equal countries, reversing to thinner cortices in countries with greater gender inequality. These results point to the potentially hazardous effect of gender inequality on women’s brains and provide initial evidence for neuroscience-informed policies for gender equality.
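For readers curious about the method, the core analysis is a meta-regression: each sample contributes an effect size for the sex difference in cortical thickness, and country-level gender inequality is tested as a moderator of those effect sizes. A minimal inverse-variance-weighted sketch in Python (all numbers and variable names here are made up for illustration, and it ignores the between-study heterogeneity term a full random-effects model would also estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one observed effect size per sample (women minus men cortical
# thickness, Cohen's d) plus that country's gender-inequality index.
n_samples = 139
inequality = rng.uniform(0.0, 1.0, n_samples)   # hypothetical index, 0 = fully equal
true_effect = 0.10 - 0.45 * inequality          # thinner cortex where inequality is high
se = rng.uniform(0.08, 0.25, n_samples)         # per-sample standard errors
d = true_effect + rng.normal(0.0, se)           # observed effect sizes

# Inverse-variance-weighted least squares approximates the meta-regression:
# each sample is weighted by 1/se^2, so precise samples count for more.
w = 1.0 / se**2
X = np.column_stack([np.ones(n_samples), inequality])
WX = X * w[:, None]
beta = np.linalg.solve(X.T @ WX, WX.T @ d)

intercept, slope = beta
print(f"effect in fully equal countries (intercept): {intercept:+.3f}")
print(f"change per unit of inequality (slope):       {slope:+.3f}")
```

A negative slope is the toy analogue of the paper's finding: no difference (or thicker cortex in women) at low inequality, reversing to thinner cortex in women as inequality grows.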

Wednesday, May 31, 2023

Constructing Self and World

There is a strong similarity between the predictive processing brain model that has been the subject of numerous Mind Blog Posts, and the operations that ChatGPT and other generative pre-trained transformer algorithms are performing, with the ‘priors’ of the predictive processing model being equivalent to the ‘pre-trained’ weightings of the generative transformer algorithms.  

The open and empty awareness of the non-dual perspective corresponds to the ‘generator’ component of the AI algorithms. It is what can begin to allow reification - rendering opaque rather than transparent - of the self model and other products of the underlying content-free open awareness generator (such as our perceptions of trees, interoceptive signals, cultural rules, etc.). It enables seeing, rather than being, the glass window through which you are viewing the tree in the yard. The rationale of non-dual awareness is not to have ‘no-self.’ The ‘self’ prior is there because it is a very useful avatar for interactions. Rather, the non-dual perspective can enable a tweaking or re-construction of previously transparent priors - now rendered opaque - that lets go of their less useful components. The point of having an expanded 'no self' is to become aware of and refine the illusions or fantasies about what is in our internal and external worlds that arise from it.

The paragraphs above derive from my listening to one of Sam Harris’ podcasts in his “Making Sense” series, titled “Constructing Self and World.” It was a conversation with Shamil Chandaria, who is a philanthropist, serial entrepreneur, technologist, and academic with multidisciplinary research interests. During the conversation a number of ideas I am familiar with were framed in a very useful way, and I wanted to put them down and pass on to MindBlog readers the thumbnail summary above.

Monday, May 29, 2023

To fulfill its promise, artificial intelligence needs to deepen human intelligence.

For MindBlog readers interested in AI, I have to point to another must-read article by Ezra Klein. Below are some clips that try to communicate his central points. (And no, I'm not using ChatGPT to generate this post, because of several of AI's limitations that he notes.) Klein starts by noting the many ways in which the internet has not fulfilled its promise, overwhelming us with more information than we can process, degrading our political discourse and attention spans, and leading us into multitasking, which not only diminishes our cognitive depth but also activates our stress chemistry. He then lists several wrong directions that might be taken by large language models like OpenAI’s GPT-4 and Google’s Bard:
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up...A question to ask about large language models, then, is where does trustworthiness not matter?...A.I. will be great for creating content where reliability isn’t a concern. The personalized video games and children’s shows and music mash-ups and bespoke images will be dazzling...But where reliability matters — say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions — deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.
...Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second...Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?...Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains — though real — are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.
Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds...My third concern is related to this use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing...It’s the time spent inside an article or book spent drawing connections to what we know and having thoughts we would not otherwise have had that matters...No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real.
These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that “if men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.” I think the trade-off here was worth it — I am, after all, a writer — but it was a trade-off. Human beings really did lose faculties of memory we once had.
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don’t overwhelm and distract and diminish us. We failed that test with the internet. Let’s not fail it with A.I.

Friday, May 26, 2023

When Their Idea of Liberty Is Your Idea of Death

This post gives a snapshot of another really excellent Thomas Edsall piece, provided courtesy of ChatGPT-4, making the central points accessible to MindBlog readers who are blocked by the NYTimes paywall.

In this essay, Thomas Edsall explores the concept of freedom as a contested ideal in American politics, particularly in the run-up to the 2024 election.

President Biden, in announcing his re-election bid, frames freedom as being under threat by MAGA extremists. He emphasizes defending democracy, personal freedom, and civil rights, portraying the election as a choice between more or less freedom, more or fewer rights.

The essay also highlights contrasting views from Florida Governor Ron DeSantis, who views the threat to freedom as coming not only from government actions, but also from powerful institutions pushing a "woke" agenda. DeSantis criticizes elites in federal bureaucracy, media, Big Tech, and universities for using undemocratic means to impose their ideology on society.

Edsall cites the political philosopher Isaiah Berlin's two concepts of liberty: negative freedom, being free from interference by others, and positive freedom, the individual's desire to be their own master. This dichotomy is mirrored in the divergent notions of freedom espoused by figures such as George Wallace and Martin Luther King Jr.

The essay also examines differing political interpretations of freedom, from Biden's liberal tradition linked to F.D.R., emphasizing meeting material needs and democratic institutions, to Reagan's view of freedom as largely freedom from government interference. Edsall also quotes Smith, who argues that Trump's and DeSantis's notion of freedom is more constrained and restrictive, supporting democracy only as long as it produces the results they want.

Edsall points out that both the left and the right have imposed limitations on freedom. He notes conservative Republicans' restrictions on teaching about race and sex, banning books, suppressing voting, and barring local ordinances. Meanwhile, left-leaning students and faculty have sought to "cancel" figures who violate progressive orthodoxy, disrupt conservative speakers, and restrict teaching material considered harmful to marginalized groups.

Democrats and Republicans are competing to define and advocate for freedom. Isabel V. Sawhill from the Brookings Institution argues that Democrats have substance behind their freedom rhetoric, citing battles over abortion rights and Republicans' changing attitude towards the business community.

Francis Fukuyama highlights the rise of Trumpist populism and the shift of American conservatives towards a more European conservative approach. He argues that the right is using exaggerated fears of the "woke" agenda to justify authoritarian assaults on democracy, but Edsall points out that some voters view liberal policies as infringing on their freedom.

The essay examines homelessness as the focus of an ongoing debate over freedom, and notes how progressives historically took the lead in defining and advancing freedom. William Galston argues that progressives lost their command of the freedom issue in the 1960s, allowing conservatives to claim it.

As the 2024 election approaches, both parties have some favorable signs in polls. Republicans can point to Biden's vulnerability and conservative attitudes towards transgender issues. Democrats can highlight the public's opposition to book banning and strict abortion bans. The essay concludes by emphasizing the importance of the election in determining the nation's direction on freedom and liberty, and warns Democrats not to take anything for granted.



Wednesday, May 24, 2023

Using AI to decipher words and sentences from brain scans

Things are happening very fast in AI, as this work from Huth and his collaborators shows. Previous work has shown that speech articulation and other signals can be decoded from invasive intracranial recordings, and they have developed a language decoder that now accomplishes this with non-invasive fMRI. Motivated readers can obtain the detailed description of the work by emailing me. Their abstract:
A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.
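As I understand the approach, the decoder does not read out words directly: an encoding model predicts the brain response a candidate word sequence should evoke, and a beam search keeps the candidate sequences whose predicted responses best match the actual recording. Here is a deliberately cartoonish sketch of that idea, with random toy embeddings and a random linear encoding model standing in for the real fitted components:

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = ["the", "dog", "ran", "home", "cat", "sat"]
dim = 8  # toy "semantic feature" dimension

# Hypothetical stand-ins: random word embeddings, and a linear encoding model
# mapping the mean embedding of a sequence to a simulated brain response.
emb = {w: rng.normal(size=dim) for w in vocab}
W = rng.normal(size=(dim, dim))  # in the real work, fit to fMRI training data

def predict_response(words):
    feats = np.mean([emb[w] for w in words], axis=0)
    return W @ feats

# Simulate a noisy recording evoked by a "true" perceived sentence.
true_sentence = ["the", "dog", "ran", "home"]
recording = predict_response(true_sentence) + rng.normal(scale=0.05, size=dim)

# Beam search: extend candidates word by word, keeping the k sequences whose
# predicted responses are closest to the observed recording.
k = 3
beams = [[w] for w in vocab]
for _ in range(len(true_sentence) - 1):
    candidates = [b + [w] for b in beams for w in vocab]
    candidates.sort(key=lambda c: np.linalg.norm(predict_response(c) - recording))
    beams = candidates[:k]

print("decoded:", " ".join(beams[0]))
```

Because the toy features are an order-blind average, this sketch can only hope to recover something semantically close to the original words; the actual decoder uses a language model to propose fluent candidate continuations, which is what yields intelligible word sequences.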

Monday, May 22, 2023

Multilevel cultural evolution - new theory and practical applications

I want to point to an exceptionally lucid and well-written exposition by David Sloan Wilson et al. and pass on the opening paragraphs of their article, which frame the context for understanding how group-level selection operates at multiple levels, from cells to societies to the entire earth system. I strongly recommend reading through it slowly and carefully. If that seems a bit much, skip down to the section titled "The earth system as the ultimate unit of selection." (Added note: for a comprehensive game-theoretic analysis of evolutionary dynamics within and among competing groups, see Cooney et al.)

Darwin’s theory of evolution is celebrated for its explanatory scope, prompting the geneticist Theodosius Dobzhansky to declare in 1973 that “nothing in biology makes sense except in the light of evolution”. However, what became the “modern synthesis” can also be called the “great constriction.” The study of evolution was confined almost entirely to genetic evolution, relegating the study of human cultural change to other disciplines.
It was not until the 1970s that evolutionary thinkers started to go back to basics by defining natural selection as Darwin did—any process that combines the triad of variation, selection, and replication—irrespective of the proximate mechanisms. The first mathematical models of cultural evolution were based on population genetics models developed 50 y earlier.
Today, the study of cultural evolution in humans and other species is in full swing —and these advances in basic scientific knowledge have practical applications. In this article, we will first review major developments in our basic understanding of human cultural evolution. Then, we will show how they can be applied to a diversity of positive change efforts, no matter what the scale (e.g., from the individual person to global governance) or topic domain. We elaborate for the topics of complex systems science and engineering, economics and business, mental health and well-being, and global change efforts.
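The Darwinian triad the authors define above, variation, selection, and replication, is easy to see in a minimal simulation (my own generic toy model, not taken from their article):

```python
import random

random.seed(42)

# A population of agents, each holding a cultural "practice" with some payoff.
practices = {"A": 1.0, "B": 1.5, "C": 0.7}   # hypothetical payoffs
population = [random.choice(list(practices)) for _ in range(200)]

for generation in range(50):
    # Selection + replication: practices are copied in proportion to payoff.
    weights = [practices[p] for p in population]
    population = random.choices(population, weights=weights, k=len(population))
    # Variation: occasional innovation or copying error.
    population = [random.choice(list(practices)) if random.random() < 0.01 else p
                  for p in population]

counts = {p: population.count(p) for p in practices}
print(counts)
```

Nothing here depends on genes: any process combining those three ingredients, whether the replicators are alleles, firms, or norms, produces the same winnowing toward the higher-payoff variant, which is exactly the point of defining natural selection the way Darwin did.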

Here is the text from the section near the end of the article titled "The earth system as the ultimate unit of selection."

The earth system as the ultimate unit of selection.
We have seen that multilevel selection is like a perverse alchemist who turns gold into lead. Self-preservation—a good thing—becomes disruptive selfishness. Helping kith and kin—a good thing—becomes cronyism and nepotism. The welfare of my nation—a good thing—leads to international conflicts. Thriving economies—a good thing—leads to overheating the earth. Nearly everything that is pathological at higher scales can be traced to behaviors that are prosocial at smaller scales.
The only solution to this problem is for policies to be formulated with welfare of the whole-earth system in mind. This is not sufficient by itself, as we will elaborate below, but the basic logic of multilevel selection reveals that it is necessary. There is no invisible hand to permute lower-level interests into higher-level welfare other than our own conscious efforts.
Superficially, it might seem that selection at the planetary scale is impossible because our planet is not competing with any other planets. What makes planet-level selection possible is a decision-making process that makes planetary welfare the target of selection, orients variation around the target, and identifies and replicates better practices, realizing they will be sensitive to context. This is how conscious cultural evolution takes place at smaller scales, as described in the previous sections, and can also take place at the global scale.
The concept of the whole earth as a cooperative system and the primary social identity of an individual was beyond the imagination only a few centuries ago. Nevertheless, when it comes to cultural evolution, the past does not predict the future. Given the myriad forms of globalization that have taken place during the last century, it is difficult not to consider the whole earth as a single system that must transition from CAS2 (“survive”) to CAS1 (“thrive”). Human social groups are nearly always socially constructed. To say “I am first and foremost a human being and citizen of the earth” is no more difficult than to say “I am an American” or “I am a Christian.”
Many people have already adopted a whole-earth ethic, which does manifest as action to a degree—but they do not have a common and authoritative theoretical framework to invoke and from which to derive effective policies. This is in contrast to neoclassical economics and its elaborate mathematical justification of the invisible hand metaphor. Multilevel selection reveals the invisible hand metaphor to be profoundly untrue. It is simply not the case, in economics or any other policy domain, that the lower-level pursuit of self-interest robustly benefits the common good. However, multilevel selection does lead to another, more legitimate conception of the invisible hand metaphor. We must act in two capacities: as designers of whole systems and as participants in the systems that we design. As designers, we must have the welfare of the whole system in mind, which is the opposite of the invisible hand metaphor. As participants, we can indeed respond to our local concerns without having the whole system in mind. Put another way, selection at the level of whole systems is the hand, which winnows the small set of lower-level behaviors that benefit the common good from the much larger set of lower-level behaviors that undermine the common good.

Friday, May 19, 2023

Liberty, Equality, and Fraternity in the Postmodern, Neoliberal Age

I want to share with MindBlog readers the background material prepared by Austin Rainbow Forum member Daniel Owen to support our Sunday June 4 2023 discussion on the conflict between the ideals of liberty, equality, and fraternity: 

Liberty, Equality, and Fraternity in the Postmodern, Neoliberal Age

The French motto of liberty, equality, and fraternity has symbolized democracy and human rights since the French Revolution. Our U.S. Declaration of Independence declares, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” No mention of fraternity, but in the 1835 political theory classic “Democracy in America,” the French political philosopher Alexis de Tocqueville observed this about American society:

“Americans of all ages, all conditions, all minds constantly unite. Not only do they have commercial and industrial associations in which all take part, but they also have a thousand other kinds: religious, moral, grave, futile, very general and very particular, immense and very small; Americans use associations to give fêtes, to found seminaries, to build inns, to raise churches, to distribute books, to send missionaries to the antipodes; in this manner they create hospitals, prisons, schools. Finally, if it is a question of bringing to light a truth or developing a sentiment with the support of a great example, they associate. Everywhere that, at the head of a new undertaking, you see the government in France and a great lord in England, count on it that you will perceive an association in the United States.”

Perhaps the ideal of fraternity as civic-mindedness was just part of the character of early American culture. Is it still, or have we lost that?

Earlier this year, I listened to a talk by a Unitarian minister who suggested that liberty, equality, and fraternity are like a three-legged stool. If one of the legs becomes too long or too short, the stool is unbalanced. He thought our culture was out of balance with too much emphasis on individual liberty at the expense of equality and fraternity. Agree? Disagree?

How do we find a balance?

How do we deal with differences between the political left and right regarding what form these ideals should take?

How relevant are these ideals today in a globalized world dominated by neoliberalism?

How can we reconcile “We hold these truths to be self-evident…” with the relativism of postmodernism, where such declarations may be seen as socially constructed metanarratives used to advance the power and interests of some groups at the expense of others?

One possible response is to rethink what these principles mean in the 21st century. Liberty does not have to mean unlimited freedom to pursue one’s own agenda. It can also mean freedom to participate in democratic decision-making, to express one's identity and culture, and to access education, health care, and other public goods. Equality does not have to mean uniformity or conformity. It can also mean respect for diversity, human rights, and social justice. Fraternity does not have to mean exclusion or nationalism. It can also mean solidarity, empathy, and mutual aid among people of different backgrounds, beliefs, and interests.

Thoughts?

Recommended reading/listening:

When Their Idea of Liberty is Your Idea of Death (12-minute read)

What It Means to Be Woke (5-minute read)

What Is Postmodernism? (12-minute video)

Extra Credit: Why We're Still Postmodern (56-minute podcast. A lively discussion of postmodernism, neoliberalism, and contemporary culture from a very different angle than the previous video. Worth your time.)

Wednesday, May 17, 2023

An evolutionary psychology perspective on the politics of leadership

Thomas Edsall does another of his scholarly analyses, this time putting Trump as the wannabe alpha male in a larger evolutionary context. I started this post with a number of clips from Edsall's text, but then decided to have another go at asking ChatGPT-4 to do a summary for me. In response to my request "Please condense and summarize the following essay by Thomas Edsall" it did a quite creditable job, and I'm going to be lazy and pass on what the bot did!

Thomas Edsall's essay discusses how Donald Trump's alpha-male image conflicts with the barrage of criminal investigations and civil suits he faces, threatening his self-portrayal and traditional leadership standards. Trump's bid for dominance never garnered majority support, yet he still poses a serious threat to American democracy.

The essay delves into the evolutionary and neuroendocrine foundations of human aggression, highlighting the crucial role of coalition formation in overcoming coercive alpha males. Early sub-elite males formed coalitions to overthrow alpha males who violated social norms, paving the way for a more egalitarian society. This development has been traced to the roots of democracy, as it eventually led to the state's acquisition of police power and the legal use of force to enforce norms and laws.

The criminal investigations and civil suits against Trump represent America's democratic system of government, attempting to constrain a deregulated alpha-male wannabe. Over time, complex systems have evolved to limit the power of leaders, like coalitions, power-sharing agreements, parliaments, and constitutions. However, the dynamic of dominance persists, as voters often favor taller candidates and crave strong leaders.

Prof. Rose McDermott explains the process of self-domestication in human societies, where beta and gamma males work together to unseat coercive alpha males who exploit the community, leading to more egalitarian dynamics. McDermott views former President Trump as an example of a coercive alpha male, and suggests that the polarization in the U.S. has prevented the formation of coalitions strong enough to oppose him.

Some academics disagree with this biological explanation for modern social behavior, such as Prof. John Horgan, who finds it deterministic and promoting fatalism, and Prof. R. Brian Ferguson, who disputes the idea of alphas facing death due to sub-alpha elite coalitions. On the other hand, Prof. Dan McAdams argues that Trump's personality and authoritarian dynamic align with an older, evolutionarily-driven paradigm of dominance.

Prof. Kevin Smith attributes the rise of coercive alpha males and other unprincipled personalities in politics to the weakening of democratic norms, pointing out that these norms are difficult to institutionalize and easy to destroy. Once gone, they may be difficult to re-establish, leaving the political system vulnerable to demagogues and tyrants.

Monday, May 15, 2023

People who talk too much

I host a monthly discussion group in Austin TX, The Austin Rainbow Forum, that meets at 2 pm on the first Sunday of every month to consider interesting topics and ideas. This past May 7, one of our group members led a discussion of "overtalking" in the modern world, which has got us all spouting opinions, giving advice, and getting ourselves into trouble, according to Dan Lyons in his recent book titled "STFU: The Power of Keeping Your Mouth Shut in an Endlessly Noisy World." The central ideas in Lyons’ book are summarized in this Time Magazine article. I looked through a reviewer's copy of the book I was sent, and suggest that it is worth a look if you are stimulated by the summary article. The bottom line of the book could be stated as "Shut up and listen instead of talking so much." Lyons offers five nudges:

-When possible, say nothing

-Master the power of the pause

-Quit social media

-Seek out silence

-Learn how to listen

Lyons is a professional columnist who writes with a very engaging style, even if the level of his coverage is sometimes a bit superficial.  (He quotes a researcher who studied brain activity and '“figured out what causes talkaholism,” ...unfortunately, on doing a quick look up of the work describing the neuronal measurements, I found that there is no there there.)

Friday, May 12, 2023

Virality

This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13, from the seventh section of her book, titled "Virality."

The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our  human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.

The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.

When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. 
Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
            
The general public has become preoccupied by robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.

The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to distinguish between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.

Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that defeated champion Go player Lee Sedol in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”

If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” a phrase that is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.

If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.


Wednesday, May 10, 2023

MindBlog paragraphs bloviating on the nature of the self ask Google Bard and Chat GPT 4 for help

I can’t resist passing on this example of checking out what Google Bard and OpenAI’s Chat GPT-4 might do with some of my opaque paragraphs.  I asked them to clarify and improve the following two paragraphs, which they did (Microsoft Bing again proved to be useless). 

My paragraphs:

There can be a calm in the understanding that our transient Anthropocene era is a brief blip in the history of this planet and the cosmos. The disappearance of Homo sapiens as we know it seems a certainty, a transition either to extinction or to a further evolved trans-human form. It doesn’t seem useful to become agitated about this, but rather to live fully in the present moment as grateful witnesses to what is unfolding.

There is an implicit ontology and epistemology of the experienced self underlying this ‘pay attention in the present’ stance…an implicit assumption about the nature of the ‘what it is’ that is the resident experienced self. The ontological assumption is that this ‘what it is’ is an emergent property of processes that cannot be known - there is no there there. The epistemological assumption is that everything we experience - all knowledge, thought, and feeling - consists of illusions that can be known only with respect to each other.

Here is Chat GPT 4’s concise effort, a slightly edited version of its second regenerated response, which was a bit better than its first version:

There can be a sense of serenity in acknowledging that our fleeting Anthropocene epoch represents just a brief moment in the vast history of our planet and the cosmos. The eventual transformation or disappearance of Homo sapiens as we know it is inevitable, whether through extinction or evolution into a more advanced transhuman form. Becoming overly concerned about this outcome is not particularly helpful; instead, we should fully embrace the present moment and appreciate the unfolding events as grateful observers.

An implicit ontology and epistemology of the experienced self underlie this 'live in the present' approach, suggesting certain assumptions about the nature of the self that we experience. The ontological assumption posits that the self is an emergent property of processes that cannot be truly understood, signifying that there is no fixed essence to be found. The epistemological assumption claims that all of our experiences, including knowledge, thoughts, and feelings, are illusory and can only be understood in relation to one another.

And here is Google Bard’s longer, and also excellent version:

The Anthropocene era is a brief blip in the history of this planet and the cosmos. It is certain that Homo Sapiens will eventually disappear, either through extinction or through evolution into a new form. It is therefore not useful to become agitated about this, but rather to live fully in the present moment as grateful witnesses to what is unfolding.

This "pay attention to the present" stance is based on an implicit ontology and epistemology of the experienced self. The ontological assumption is that the experienced self is an emergent property of processes that cannot be known directly. In other words, the self is not a thing, but rather a process that emerges from the interaction of many different factors, such as our genes, our environment, and our experiences. The epistemological assumption is that all knowledge, thought, and feeling are illusions that can only be known with respect to each other. In other words, we can never know anything directly, but only through the lens of our own subjective experience.

This view of the self has several implications. First, it means that we should not take ourselves too seriously. We are simply a product of our genes, our environment, and our experiences. We are not in control of our own destiny, and we should not try to be. Second, it means that we should not be afraid of change. The self is constantly changing, and this is a good thing. It allows us to grow and evolve. Third, it means that we should be grateful for the present moment. The present moment is all that we have, and it is the only moment that we can truly experience.

Monday, May 08, 2023

Can a theory of events-in-time - our future history - ever be crafted?

I first drafted this post as I usually do, to pass on clips that attempt to extract the core points from a fascinating brief piece by Venkatesh Rao. I then decided to ask Chat GPT 4, Google’s Bard, and Microsoft Bing to "Please condense the core ideas in the following document into a few paragraphs:" The response from Chat GPT 4 was amazing, so I paste it in here before giving you my clips:

Venkatesh Rao explores the idea of the "End of History" as an analogy to the "End of Biology," where biology transitioned from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter. The End of History could signify a move away from vitalist understandings of the logic of events-in-time. Rao acknowledges that the End of History theory is not widely popular due to its non-constructive nature, as it subverts vitalist understanding of events without providing a parsimonious and generative alternative.

Rao suggests that deeper logics of events-in-time exist in nature, with biological evolution and thermodynamics offering more complex explanations than a mere "one damn thing after another" perspective. He proposes that events-in-time might be viewed as the trace of an irreversible, non-ergodic computation process, with long-term collective memory playing a significant role. History, as we have understood it so far, might be considered a particular memory architecture with a specific irreversible and non-ergodic structure to the evolving computation. This perspective could potentially pave the way for a more refined and deliberate understanding of future events, reducing events-in-time after the End of History to a specialized kind of computation.

Here are my clips that Chat GPT 4 used to generate the above condensation:

What if the Hegel-Fukuyama idea of the End of History is something like Darwin’s idea of natural selection?

The Darwinian century began with On the Origin of Species in 1859 and ended with the discovery of the structure of DNA in 1953. Humanity experienced an End of Biology moment somewhere between those bookend events…a demotion of the discipline from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter…Biology went from being an inscrutable aspect of providence to an emerging engineering discipline, subservient to physics and mathematics by way of chemistry.

By analogy, the End of History moment is something like an end to vitalist understandings of the logic of events-in-time…There is no role for divine agency, and no justification for assigning a particular positive or negative valence to apparent secular tendencies in the stream of events…The fact that the theory is historicist without being normative is perhaps what makes it so powerfully subversive. The End of History theory is the historicism that kills all other historicisms. Past the End of History, notions like progress must be regarded as analogous to notions like Ă©lan vital past the End of Biology. …it is undeniable that 30 years in, the End of History theory is still not particularly popular…One obvious reason is that it is non-constructive. It subverts a vitalist understanding of events in time without supplying a more parsimonious and generative alternative.

In Fukuyama’s theory, there are no notions comparable to variation and natural selection that allow us to continue making sense of events-in-time. There are no Mendelian clues pointing to something like a genetics of events-in-time. There is no latent Asimovian psychohistorical technology lurking in the details of the End of History theory…Perhaps one damn thing after another is where our understanding of events in time ought to end, for our own good.

I think this is too pessimistic though. Deeper logics of events-in-time abound in nature. Even biological evolution and thermodynamics, which are more elemental process aspects of reality, admit more than a one damn thing after another reading. History, as a narrower class of supervening phenomena that must respect the grammars of both, ought to admit more interesting readings, based on broadly explanatory laws that are consistent with both, but more specific than either. Dawkins’ memetic view of cultural evolution, and various flavors of social darwinism, constitute first-order attempts at such laws. Some flavors of cosmism and transhumanism constitute more complex attempts that offer hope of wresting ever-greater agency from the universe.

So what does explain the logic of events-in-time in a way that allows us to make sense of events-in-time past the End of History, in a way that improves upon a useless one damn thing after another sense of it, and says something more than the laws of evolution or thermodynamics?

I don’t have an answer, but I have a promising clue: somehow, events-in-time must be viewed as the trace of an irreversible, non-ergodic computation process, in which long-term collective memory plays a significant role.

History, as we have understood it so far, is something like a particular memory architecture that assumes a particular irreversible and non-ergodic structure to the evolving computation. The contingency and path dependence of events-in-time in human affairs is no reason to believe there cannot also be theoretical richness within the specificity. A richness that might open up futures that can be finely crafted with a psychohistorical deliberateness, rather than simply vaguely anticipated and crudely shaped. 

Perhaps, just as life after the End of Biology was reduced to a specialized kind of chemistry, events-in-time, after the End of History, can be reduced to a specialized kind of computation.


Friday, May 05, 2023

The Data Deluge - Dataism

This post is the eighth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 11 and 12 from the  sixth section of her book, titled "Algorithm."

Chapter 11  

In the year 2001 alone, the amount of information generated doubled that of all information produced in human history. In 2002 it doubled again, and this trend has continued every year since. As Anderson noted, researchers in virtually every field have so much information that it is difficult to find relationships between things or make predictions.

What companies like Google discovered is that when you have data on this scale, you no longer need a theory at all. You can simply feed the numbers into algorithms and let them make predictions based on the patterns and relationships they notice…
Google Translate “learned” to translate English to French simply by scanning Canadian documents that contained both languages, even though the algorithm has no model that understands either language.
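The Canadian-documents example can be caricatured in a few lines: given aligned sentence pairs, a purely statistical learner can map words across languages by co-occurrence counting, with no grammar of either language. The tiny corpus below is invented, and real statistical translation is vastly more sophisticated:

```python
# Toy "translation by counting": learn word correspondences from aligned
# English/French pairs with no model of either language. The corpus is
# invented; this only illustrates the correlation-over-theory idea.
from collections import Counter, defaultdict

aligned = [
    ("the house", "la maison"),
    ("the cat", "le chat"),
    ("a house", "une maison"),
    ("a cat", "un chat"),
]

counts = defaultdict(Counter)
for en, fr in aligned:
    for e in en.split():
        for f in fr.split():
            counts[e][f] += 1  # tally every English/French co-occurrence

def translate_word(word):
    # Pick the French word that co-occurs most often -- pure correlation.
    return counts[word].most_common(1)[0][0]

print(translate_word("house"))  # 'maison'
print(translate_word("cat"))    # 'chat'
```

Content words pair with their translations more often than with stray articles, so the counts alone recover the mapping.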

These mathematical tools can predict and understand the world more adequately than any theory could. Petabytes allow us to say: ‘Correlation is enough,’…We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can let statistical algorithms find patterns where science cannot. Of course, data alone can’t tell us why something happens—the variables on that scale are too legion—but maybe our need to know why was misguided. Maybe we should stop trying to understand the world and instead trust the wisdom of algorithms…technologies that have emerged…have not only affirmed the uselessness of our models but revealed that machines are able to generate their own models of the world…this approach makes a return to a premodern epistemology…If we are no longer permitted to ask why…we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment...

Deep learning, an especially powerful brand of machine learning has become the preferred means of drawing predictions from our era’s deluge of raw data. Credit auditors use it to decide whether or not to grant a loan. The CIA uses it to anticipate social unrest. The systems can be found in airport security software…many people now find themselves in a position much like Job’s, denied the right to know why they were refused a loan or fired from a job or given a likelihood of developing cancer. It’s difficult, in fact, to avoid the comparison to divine justice, given that our justice system has become a veritable laboratory of machine-learning experiments…In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”

Hans Blumenberg, the postwar German philosopher, notes in his 1966 book The Legitimacy of the Modern Age—one of the major disenchantment texts—that theologians began to doubt around the thirteenth century that the world could have been created for man’s benefit…Blumenberg believed that it was impossible to understand ourselves as modern subjects without taking into account the crisis that spawned us. To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated but implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
            
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually return to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real. The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.

Chapter 12

Here are a smaller number of clips from the last section of Chapter 12, on the errors of algorithms.

It’s not difficult to find examples these days of technologies that contain ourselves “in a different disguise.” Although the most impressive machine-learning technologies are often described as “alien” and unlike us, they are prone to errors that are all too human. Because these algorithms rely on historical data—using information about the past to make predictions about the future—their decisions often reflect the biases and prejudices that have long colored our social and political life. Google’s algorithms show more ads for low-paying jobs to women than to men. Amazon’s same-day delivery algorithms were found to bypass black neighborhoods. A ProPublica report found that the COMPAS sentencing assessment was far more likely to assign higher recidivism rates to black defendants than to white defendants. These systems do not target specific races or genders, or even take these factors into account. But they often zero in on other information—zip codes, income, previous encounters with police—that are freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.
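The proxy mechanism described here, where a model that never sees race recovers historical inequality through correlated features like zip code, can be sketched with a toy scoring rule (all records and zip codes below are invented):

```python
# Toy illustration of proxy discrimination: a "race-blind" scoring rule
# keyed on zip code reproduces disparities baked into its training data.
# All records and zip codes below are invented.

history = [
    {"zip": "10001", "approved": True},   # historically favored area
    {"zip": "10001", "approved": True},
    {"zip": "10001", "approved": False},
    {"zip": "60620", "approved": False},  # historically redlined area
    {"zip": "60620", "approved": False},
    {"zip": "60620", "approved": True},
]

def approval_rate(zip_code):
    records = [r for r in history if r["zip"] == zip_code]
    return sum(r["approved"] for r in records) / len(records)

def score_applicant(applicant):
    # The rule never sees race, only zip code -- yet past inequality in
    # the data becomes future inequality in the decisions, closing the loop.
    return approval_rate(applicant["zip"])

a = score_applicant({"zip": "10001"})
b = score_applicant({"zip": "60620"})
print(a, b)  # the favored zip scores twice as high
```

Each new decision made this way adds another biased record to the history, which is the feedback loop the passage describes.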

It is much easier…to blame injustice on faulty algorithms than it is to contend in more meaningful ways with what they reveal about us and our society. In many cases the reflections of us that these machines produce are deeply unflattering. To take a particularly publicized example, one might recall Tay, the AI chatbot that Microsoft released in 2016, which was designed to engage with people on Twitter and learn from her interactions with users. Within sixteen hours she began spewing racist and sexist vitriol, denied the Holocaust, and declared support for Hitler.

For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths. The desire to send humans to space was for her a metaphor for this dream of scientific transcendence. She tried to imagine what the earth and terrestrial human activity must look like from so far beyond its surface:
            
“If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats. Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.”
            
The problem is that a vantage so far removed from human nature cannot account for human agency. The view of earth from the Archimedean point compels us to regard our inventions not as historical choices but as part of an inexorable evolutionary process that is entirely deterministic and teleological, much like Kurzweil’s narrative about the Singularity. We ourselves inevitably become mere cogs in this machine, unable to account for our actions in any meaningful way, as the only valid language is the language of quantification, which machines understand far better than we do.

This is more or less what Jaron Lanier warned about in his response to Chris Anderson’s proposal that we should abandon the scientific method and turn to algorithms for answers. “The point of a scientific theory is not that an angel will appreciate it,” Lanier wrote. “Its purpose is human comprehension. Science without a quest for theories means science without humans.” What we are abdicating, in the end, is our duty to create meaning from our empirical observations—to define for ourselves what constitutes justice, and morality, and quality of life—a task we forfeit each time we forget that meaning is an implicitly human category that cannot be reduced to quantification. To forget this truth is to use our tools to thwart our own interests, to build machines in our image that do nothing but dehumanize us.

 

Thursday, May 04, 2023

Yuval Harari's vision of the end of human history.

I passed on a link to Harari's recent piece in The Economist in Tuesday's post, but I haven't been able to get some of his crystal clear thinking out of my mind, and so I'm putting a few clips of text in this post for MindBlog readers, and also because MindBlog serves as my personal archive of ideas I don't want to forget. The article is about AI hacking the operating system of human civilisation.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures...Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it...What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?
What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.
What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.
...since ancient times humans have feared being trapped in a world of illusions...In the 17th century RenĂ© Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality. In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.
We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.
This text has been generated by a human.
Or has it?

Wednesday, May 03, 2023

MindBlog points to books on regulating attention and healthy longevity.

I want to follow up on my post on Nestor’s book on breathing by pointing MindBlog readers to two other ‘self help’ books I have read through recently and found to be both engaging and annoying: Johann Hari’s “Stolen Focus” and Peter Attia’s “Outlive.” They are both worth selectively reading through. Like Nestor’s book, they radiate the energy characteristic of male overachievers close to age 50 who have reached a professional peak in their advocacy of causes they have focused on: regulating attention and living for longevity. I got a bit impatient with how much of their personal stories I had to wade through to get to the interesting facts and information presented. I guess that was meant to make the text more reader friendly, but it left me feeling less friendly towards the authors. I also found it instructive to have a look at the different Wikipedia pages for Nestor, Hari, and Attia.

Tuesday, May 02, 2023

Keeping up with debate and commentary on A.I.

.... is an impossible task, but I thought I would pass on links to three articles from yesterday's New York Times on the ongoing debate, which give some nice examples of AI bots “hallucinating,” that is, fabricating with great plausibility events that never happened. Great caution is advised when asking a bot a question to which you don't already know much of the answer...

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
When A.I. Chatbots Hallucinate
What Exactly Are the Dangers Posed by A.I.?

 Added note:  soon after the above was posted I came across this piece in The Economist by Yuval Harari:

AI has hacked the operating system of human civilisation

Monday, May 01, 2023

Panpsychism and Metonymy

This post is the seventh installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 9 and 10 from the fifth section of her book, titled "Metonymy."

Chapter 9

Panpsychism has surfaced from time to time over the centuries, as in the philosophy of Bertrand Russell and Arthur Eddington, who realized that the two most notable “gaps” in physicalism—the problem of consciousness and the “problem of intrinsic natures” (the question of what matter is)—could be solved in one fell swoop. Physics could not tell us what matter was made out of, and nobody could understand what consciousness was, so maybe consciousness was, in fact, the fundamental nature of all matter. Mental states were the intrinsic nature of physical states…The impasse surrounding the hard problem of consciousness and the weirdness of the quantum world has created a new openness to the notion that the mind should have never been excluded from the physical sciences in the first place.

Some neuroscientists have arrived at panpsychism not through philosophy but via information theory. One of the leading contemporary theories of consciousness is integrated information theory, or IIT. Pioneered by Giulio Tononi and Christof Koch…IIT holds that consciousness is bound up with the way that information is “integrated” in the brain. Information is considered integrated when it cannot be easily localized but instead relies on highly complex connections across different regions of the brain…They have come up with a specific number, Ί, or phi, which they believe is a threshold and is designed to measure the interdependence of different parts of a system…many other creatures have a nonzero level of phi, which means that they too are conscious—as are atoms, quarks, and some single-celled organisms…Unlike emergentism and other systems theories that cleverly redefine terms like “consciousness” and “cognition” so that they apply to forests and insect colonies, panpsychists believe that these entities truly possess some kind of phenomenal experience—that it feels like something to be a mouse, an amoeba, or a quark…Although the theory is still a minority position within academia, there is undoubtedly more openness today to theories that upturn modern orthodoxies to extend consciousness down the chain of being.
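An aside for readers who like to see the arithmetic: phi itself is notoriously hard to compute, but the underlying intuition, that an integrated system is one whose parts carry information about each other, can be illustrated with a crude proxy. The sketch below is my own toy example, not Tononi's actual Ί calculation; it uses mutual information between the two halves of a two-part binary system, which is zero when the parts are independent and maximal when they are tightly coupled:

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) between two binary variables,
    given their joint distribution joint[x][y]."""
    px = [sum(row) for row in joint]               # marginal of part X
    py = [sum(col) for col in zip(*joint)]         # marginal of part Y
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p = joint[x][y]
            if p > 0:
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two independent parts: knowing one half tells you nothing about the other.
independent = [[0.25, 0.25], [0.25, 0.25]]
# Two tightly coupled parts: each half's state mirrors the other's.
integrated = [[0.5, 0.0], [0.0, 0.5]]

print(mutual_information(independent))  # 0.0 bits -- no integration
print(mutual_information(integrated))   # 1.0 bit  -- fully interdependent
```

In IIT the real measure is taken over the system's least-integrated partition and grows intractably with system size, which is why phi has only ever been computed exactly for very small networks.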

While popular debates about the theory rarely extend beyond the plausibility of granting consciousness to bees and trees, it contains far more radical implications. To claim that reality itself is mental is to acknowledge that there exists no clear boundary between the subjective mind and the objective world. When Bacon denounced our tendency to project inner longings onto scientific theories, he took it for granted—as most of us do today—that the mind is not part of the physical world, that meaning is an immaterial idea that does not belong to objective reality. But if consciousness is the ultimate substrate of everything, these distinctions become blurred, if not totally irrelevant. It’s possible that there exists a symmetry between our interior lives and the world at large, that the relationship between them is not one of paradox but of metonymy—the mind serving as a microcosm of the world’s macroscopic consciousness. Perhaps it is not even a terrible leap to wonder whether the universe can communicate with us, whether life is full of “correspondences,” as the spiritualists called them, between ourselves and the transcendent realm—whether, to quote Emerson, “the whole of nature is a metaphor of the human mind.”

Although integrated information theory is rooted in longstanding analogies between the brain and digital technologies, it remains uncertain whether the framework allows for machine consciousness. Koch argues that nothing in IIT necessitates that consciousness is unique to organic forms of life… So long as a system meets the minimum requirements of integrated information, it could in principle become conscious, regardless of whether it’s made of silicon or brain tissue. However, most digital computers have sparse and fragmented connectivity that doesn’t allow for a high level of integration.

One of the central problems in panpsychism is the “combination problem.” This is the challenge of explaining how conscious microsystems give way to larger systems of unified consciousness. If neurons are conscious—and according to Koch they have enough phi for “an itsy-bitsy amount of experience”—and my brain is made of billions of neurons, then why do I have only one mind and not billions? Koch’s answer is that a system can be conscious only so long as it does not contain and is not contained within something with a higher level of integration. While individual neurons cultured in a petri dish might be conscious, the neurons in an actual brain are not, because they are subsumed within a more highly integrated system...This is why humans are conscious while society as a whole is not. Although society is the larger conglomerate, it is less integrated than the human brain, which is why humans do not become swallowed up in the collective consciousness the way that neurons do.

It is, however, undeniable that society is becoming more and more integrated. Goff pointed out recently that if IIT is correct, then social connectivity is a serious existential threat. Assuming that the internet reaches a point where its information is more highly integrated than that of the human brain, it would become conscious, while all our individual human brains would become absorbed into the collective mind. “Brains would cease to be conscious in their own right,” Goff writes, “and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity.” Goff likens this scenario to the visions of Pierre Teilhard de Chardin, the French Jesuit priest who, as we’ve seen, prophesied the coming Omega Point and inspired aspects of transhumanism. Once humanity is sufficiently connected via our information technologies, Teilhard predicted, we will all fuse into a single universal mind—the noosphere—enacting the Kingdom of Heaven that Christ promised.
           
This is already happening, of course, at a pace that is largely imperceptible: in the speed with which ideas go viral, cascading across social platforms, such that the users who share them begin to seem less like agents than hosts, nodes in the enormous brain…in the efficiency of consensus, the speed with which opinions fuse and solidify alongside the news cycle, like thought coalescing in the collective consciousness. We have terms that attempt to catalogue this merging—the “hive mind,” “groupthink.” There are times when I become aware of my own blurred boundaries, seized by the suspicion that I am not forming new opinions so much as assimilating them…I don’t know what to call this state of affairs, but it does not feel like the Kingdom of God.


Chapter 10

From the end of the chapter:

Idealism and panpsychism are appealing in that they offer a way of believing once again in the mind—not as an illusion or an epiphenomenon but as a feature of our world that is as real as anything else. But their proponents rarely stop there. In some cases they go on to make the larger claim that there must therefore exist some essential symmetry between the mind and the world, that the patterns we observe in our interior lives correspond to a more expansive, transcendent truth. Proponents of these theories occasionally appeal to quantum physics to argue that the mind-matter dichotomy is false—clearly there exists some mysterious relationship between the two. But one could just as easily argue that physics has, on the contrary, affirmed this chasm, demonstrating that the world at its most fundamental level is radically other than ourselves—that the universe is, as Erwin Schrödinger put it, “not even thinkable.”

This is precisely the modern tension that Arendt calls attention to in The Human Condition. On the one hand, the appearance of order in the world—the elegance of physical laws, the usefulness of mathematics—tempts us to believe that our mind is made in its image, that “the same patterns rule the macrocosm and the microcosm alike.” In the enchanted world, order was seen as proof of eternal unity, evidence that God was present in all things, but for the modern person this symmetry leads inevitably back to Cartesian doubt—the suspicion that the order perceived stems from some mental deception. We have good reason to entertain such suspicions, Arendt argues. Since Copernicus and Galileo, science has overturned the most basic assumptions about reality and suggested that our sensory perception is unreliable. This conclusion became unavoidable with the discovery of general relativity and quantum physics, which suggest that “causality, necessity, and lawfulness are categories inherent in the human brain and applicable only to the common-sense experiences of earthbound creatures.” We keep trying to reclaim the Archimedean point, hoping that science will allow us to transcend the prison of our perception and see the world objectively. But the world that science reveals is so alien and bizarre that whenever we try to look beyond our human vantage point, we are confronted with our own reflection. “It is really as though we were in the hands of an evil spirit,” Arendt writes, alluding to Descartes’s thought experiment, “who mocks us and frustrates our thirst for knowledge, so that whenever we search for that which we are not, we encounter only the patterns of our own minds.”
           
That is not to say that the Archimedean point is no longer possible. In her 1963 essay “The Conquest of Space and the Stature of Man,” Arendt considers this modern problem in light of emerging technologies. The oddest thing, she notes, is that even though our theories about the world are limited and simplistic and probably wrong, they “work” when implemented into technologies. Despite the fact that nobody understands what quantum mechanics is telling us about the world, the entire computer age—including every semiconductor, starting with the very first transistor, built in 1947—has rested on well-modeled quantum behavior and reliable quantum equations. The problem is not merely that we cannot understand the world but that we can now build this alien logic into our devices. There are some scientists, Arendt notes, who claim that computers can do “what a human brain cannot comprehend.” Her italics are instructive: it’s not merely that computers can transcend us in sheer brain power—solving theorems faster than we can, finding solutions more efficiently—but that they can actually understand the world in a way that we cannot. She found this proposition especially alarming. “If it should be true…that we are surrounded by machines whose doings we cannot comprehend although we have devised and constructed them,” she writes, “it would mean that the theoretical perplexities of the natural sciences on the highest level have invaded our everyday world.” This conclusion was remarkably prescient.


 

Friday, April 28, 2023

Are we living in a simulated world?

This post is the sixth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 8 from the fourth section of her book, titled "Paradox."

Bostrom, a prominent transhumanist, believes that humanity is in the process of becoming posthuman as we merge our bodies with technology. We are becoming superintelligent ourselves. His simulation hypothesis begins by imagining a future, many generations from now, when posthumans have achieved an almost godlike mastery over the world. One of the things these posthumans might do, Bostrom proposes, is create simulations—digital environments that contain entire worlds…The inhabitants will not know that they are living in a simulation but will believe their world is all that exists…the theory’s popularity has escalated over the past decade or so. It has gained an especially fervent following among scientists and Silicon Valley luminaries, including Neil deGrasse Tyson and Elon Musk, who have come out as proponents…It has become, in other words, the twenty-first century’s favored variation on Descartes’s skeptical thought experiment—the proposition that our minds are lying to us, that the world is radically other than it seems.

…for all its supposed “naturalism,” the simulation hypothesis is ultimately an argument from design. It belongs to a long lineage of creationist rhetoric that invokes human technologies to argue that the universe could not have come about without the conscious intention of a designer…Bostrom acknowledged in his paper that there were “some loose analogies” that could be drawn between the simulation hypothesis and traditional religious concepts. The programmers who created the simulation would be like gods compared to those of us within the simulation.

One of the common objections to the informational universe is that information cannot be “ungrounded,” without a material instantiation. Claude Shannon, the father of information theory, insisted that information had to exist in some kind of physical medium, like computer hardware…if the universe were an enormous computer, then this information would in fact be instantiated on something material, akin to a hard drive. We wouldn’t be able to see or detect it because it would exist in the universe of the programmers who built it. All we would notice was its higher-level structure, the abstract patterns and laws that were part of its software. The simulation hypothesis, in other words, could explain why our universe is imbued with discernible patterns and mathematical regularities while also explaining how those patterns could be rooted in something more than mere abstractions. Perhaps Galileo was not so far off when he imagined the universe as a book written by God in the language of mathematics. The universe was software written by programmers in the binary language of code…If you take this thesis to its conclusion, it doesn’t really explain anything about the universe or its origins. Presumably there is still some original basement-level reality at its foundation—there could be no true infinite regress—occupied by the first posthumans who created the very first technological simulation. But these posthumans were just our descendants—or the descendants of some other species that had evolved on another planet—and so the question about origins remains unchanged, only pushed back one degree. Where did the universe originally come from?

Bohr…observed that humans are incapable of understanding the world beyond “our necessarily prejudiced conceptual frame.” And perhaps this can explain why the multiverse theory and other attempts to transcend our anthropocentric outlook seem a form of bad faith, guilty of the very hubris they claim to reject. There is no Archimedean point, no purely objective vista that allows us to transcend our human interests and see the world from above, as we once imagined it appeared to God. It is our distinctive vantage that binds us to the world and sets the necessary limitations that are required to make sense of it. This is true, of course, regardless of which interpretation of physics is ultimately correct. It was Max Planck, the physicist who struggled more than any other pioneer of quantum theory to accept the loss of a purely objective worldview, who acknowledged that the central problems of physics have always been reflexive. “Science cannot solve the ultimate mystery of nature,” he wrote in 1932. “And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.”

 

Wednesday, April 26, 2023

Is the mind a reliable mirror of reality? The marriage of physics and information theory

This post is the fifth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 7 from the fourth section of her book, titled "Paradox."

Is the mind a reliable mirror of reality? Do the patterns we perceive belong to the objective world, or are they merely features of our subjective experience? Given that physics was founded on the separation of mind and matter, subject and object, it’s unsurprising that two irreconcilable positions that attempt to answer this question have emerged: one that favors subjectivity, the other objectivity. Bohr’s view was that quantum physics describes our subjective experience of the world; it can tell us only about what we observe. Mathematical equations like the wave function are merely metaphors that translate this bizarre world into the language of our perceptual interface—or, to borrow Kant’s analogy, spectacles that allow us to see the chaotic world in a way that makes sense to our human minds. Other interpretations of physics, like the multiverse theory or string theory, regard physics not as a language we invented but as a description of the real, objective world that exists out there, independent of us. Proponents of this view tend to view equations and physical laws as similarly transcendent, corresponding to literal, or perhaps even Platonic, realities.

The marriage of physics and information theory is often attributed to John Wheeler, the theoretical physicist who pioneered, with Bohr, the basic principles of nuclear fission. In the late 1980s, Wheeler realized that the quantum world behaved a lot like computer code. An electron collapsed into either a particle or a wave depending on how we interrogated it. This was not dissimilar from the way all messages can be simplified into “binary units,” or bits, which are represented by zeros and ones. Claude Shannon, the father of information theory, had defined information as “the resolution of uncertainty,” which seemed to mirror the way quantum systems existed as probabilities that collapsed into one of two states. For Wheeler these two fields were not merely analogous but ontologically identical. In 1989 he declared that “all things physical are information-theoretic in origin.”
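Shannon's "resolution of uncertainty" has a precise arithmetic behind it: an event of probability p carries -log2(p) bits, the number of binary alternatives that must collapse to single it out. A minimal sketch (my own illustration, not from the book):

```python
import math

def surprisal_bits(p):
    """Shannon information content (surprisal), in bits, of an
    event that occurs with probability p."""
    return -math.log2(p)

# A fair coin flip resolves one binary alternative: exactly 1 bit.
print(surprisal_bits(0.5))    # 1.0
# An outcome among eight equally likely alternatives takes 3 bits,
# one yes/no question per halving of the possibilities.
print(surprisal_bits(1 / 8))  # 3.0
```

The parallel Wheeler noticed is that a quantum measurement, like each of these yes/no questions, collapses a space of possibilities into a definite answer.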
            
In a way Wheeler was exploiting a rarely acknowledged problem that lies at the heart of physics: it’s uncertain what matter actually is. Materialism, it is often said, is not merely an ontology but a metaphysics—an attempt to describe the true nature of things. What materialism says about our world is that matter is all that exists: everything is made of it, and nothing exists outside of it. And yet, ask a physicist to describe an electron or a quark, and he will speak only of its properties, its position, its behavior—never its essence.

Wheeler’s answer was that matter itself does not exist. It is an illusion that arises from the mathematical structures that undergird everything, a cosmic form of information processing. Each time we make a measurement we are creating new information—we are, in a sense, creating reality itself. Wheeler called this the “participatory universe,” a term that is often misunderstood as having mystical connotations, as though the mind has some kind of spooky ability to generate objects. But Wheeler did not even believe that consciousness existed. For him, the mind itself was nothing but information. When we interacted with the world, the code of our minds manipulated the code of the universe, so to speak. It was a purely quantitative process, the same sort of mathematical exchange that might take place between two machines.

While this theory explains, or attempts to explain, how the mind is able to interact with matter, it is a somewhat evasive solution to the mind-body problem, a sleight of hand that discards the original dichotomy by positing a third substance—information—that can explain both. It is difficult, in fact, to do justice to how entangled and self-referential these two fields—information theory and physics—have become, especially when one considers their history. The reason that cybernetics privileged relationships over content in the first place was so that it could explain things like consciousness purely in terms of classical physics, which is limited to describing behavior but not essence—“doing” but not “being.” When Wheeler merged information theory with quantum physics, he was essentially closing the circle, proposing that the hole in the material worldview—intrinsic essence—could be explained by information itself.

Seth Lloyd, an MIT professor who specializes in quantum information, insists that the universe is not like a computer but is in fact a computer. “The universe is a physical system that contains and processes information in a systematic fashion,” he argues, “and that can do everything a computer can do.” Proponents of this view often point out that recent observational data seems to confirm it. Space-time, it turns out, is not smooth and continuous, as Einstein’s general relativity theory assumed, but more like a grid made up of minuscule bits—tiny grains of information that are not unlike the pixels of an enormous screen. Although we experience the world in three dimensions, it seems increasingly likely that all the information in the universe arises from a two-dimensional field, much like the way holograms work, or 3-D films.
            
When I say that I try very hard to avoid the speculative fringe of physics, this is more or less what I am talking about. The problem, though, is that once you’ve encountered these theories it is difficult to forget them, and the slightest provocation can pull you back in. It happened a couple years ago, while watching my teenage cousin play video games at a family gathering. I was relaxed and a little bored and began thinking about the landscape of the game, the trees and the mountains that made up the backdrop. The first-person perspective makes it seem like you’re immersed in a world that is holistic and complete, a landscape that extends far beyond the frame, though in truth each object is generated as needed. Move to the right and a tree is generated; move to the left and a bridge appears, creating the illusion that it was there all along. What happened to these trees and rocks and mountains when the player wasn’t looking? They disappeared—or no, they were never there to begin with; they were just a line of code. Wasn’t this essentially how the observer effect worked? The world remained in limbo, a potentiality, until the observer appeared and it was compelled to generate something solid. Rizwan Virk, a video game programmer, notes that a core mantra in programming is “only render that which is being observed.”
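Virk's mantra is ordinary lazy evaluation in graphics programming. In this toy sketch (a hypothetical class of my own, not Virk's code), objects come into existence only at the moment the player looks, and are then cached so the world seems to have been there all along:

```python
class LazyWorld:
    """A toy game world that creates objects only when observed."""

    def __init__(self):
        # Objects exist only after their first observation.
        self._rendered = {}

    def observe(self, location):
        # "Only render that which is being observed": the tree or
        # bridge is generated on first look, then cached to preserve
        # the illusion that it was there all along.
        if location not in self._rendered:
            self._rendered[location] = f"object at {location}"
        return self._rendered[location]

world = LazyWorld()
print(len(world._rendered))   # 0 -- nothing exists yet
world.observe("left: bridge")
world.observe("right: tree")
print(len(world._rendered))   # 2 -- only what the player looked at
```

The analogy to the observer effect is loose, of course: in the game the unobserved world is a deterministic line of code waiting to run, whereas in quantum mechanics the unobserved state is a genuine superposition.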
            
Couldn’t the whole canon of quantum weirdness be explained by this logic? Software programs are never perfect. Programmers cut corners for efficiency—they are working, after all, with finite computing power; even the most detailed systems contain areas that are fuzzy, not fully sketched out. Maybe quantum indeterminacy simply reveals that we’ve reached the limits of the interface. The philosopher Slavoj ĆœiĆŸek once made a joke to this effect. Perhaps, he mused, God got a little lazy when he was creating the universe, like the video game programmer who doesn’t bother to meticulously work out the interior of a house that the player is not meant to enter. “He stopped at a subatomic level,” he said, “because he thought humans would be too stupid to progress so far.”

Monday, April 24, 2023

Networks and Emergentism

This post is the fourth installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Chapters 5 and 6 form the third section of her book, titled "Network."

From Chapter 5:

When it comes to biological systems like forests and swarms, emergent behavior that appears to be unified and intelligent can exist without a centralized control system like a brain. But the theory has also been applied to the brain itself, as a way to account for human consciousness. Although most people tend to think of the brain as the body’s central processing unit, the organ itself has no central control. Philosophers and neuroscientists often point out that our belief in a unified interior self—the illusion, as Richard Dawkins once put it, that we are “a unit, not a colony”—has no basis in the actual architecture of the brain. Instead there are only millions of unconscious parts that conspire, much like a bee colony, to create a “system” that is intelligent. Emergentism often entails that consciousness isn’t just in the head; it emerges from the complex relationships that exist throughout the body, and also from the interactions between the body and its environment.

Although emergentism is rooted in physicalism, critics have often claimed that there is something inherently mystical about the theory, particularly when these higher-level patterns are said to be capable of controlling or directing physical processes...few emergentists have managed to articulate precisely what kind of structure might produce consciousness in machines; in some cases the mind is posited simply as a property of “complexity,” a term that is eminently vague. Some critics have argued that emergentism is just an updated version of vitalism—the ancient notion that the world is animated by a life force or energy that permeates all things…Although emergentism is focused specifically on consciousness, as opposed to life itself, the theory is vulnerable to the same criticism that has long haunted vitalism: it is an attempt to get “something from nothing.” It hypothesizes some additional, invisible power that exists within the mechanism, like a ghost in the machine.

…emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through either ingenuity or sheer accident—consciousness will simply emerge as a side effect of complexity. At some point nature will step in and finish the job…aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself.

From Chapter 6:

…once the world was a sacred and holy place, full of chatty and companionable objects—rocks and trees that were capable of communicating with us—we now live in a world that has been rendered mute… some disenchantment narratives place the fall from grace not with the Enlightenment and the rise of modern science but with the emergence of monotheism. The very notion of imago dei, with humanity created in God’s image and given “dominion” over creation, has linked human exceptionalism with the degradation of the natural world.  Is it possible to go back? Or are these narratives embedded so deeply in the DNA of our ontological assumptions that a return is impossible? This is especially difficult when it comes to our efforts to create life from ordinary matter…In the orthodox forms of Judaism and Christianity, the ability to summon life from inert matter is denounced as paganism, witchcraft, or idolatry.

Just as the golems were sculpted out of mud and animated with a magical incantation, so the hope persists that robots built from material parts will become inhabited by that divine breath…While these mystical overtones should not discredit emergence as such—it is a useful enough way to describe complex systems like beehives and climates—the notion that consciousness can emerge from machines does seem to be a form of wishful thinking, if only because digital technologies were built on the assumption that consciousness played no role in the process of intelligence. Just as it is somewhat fanciful to believe that science can explain consciousness when modern science itself was founded on the exclusion of the mind, it is difficult to believe that technologies designed specifically to elide the notion of the conscious subject could possibly come to develop an interior life.
           
To dismiss emergentism as sheer magic is to ignore the specific ways in which it differs from the folklore of the golems—even as it superficially satisfies the same desire. Scratch beneath the mystical surface and it becomes clear that emergentism is often not so different from the most reductive forms of materialism, particularly when it comes to the question of human consciousness. Plant intelligence has been called a form of “mindless mastery,” and most emergentists view humans as similarly mindless. We are not rational agents but an encasement of competing systems that lack any sort of unity or agency. Minsky once described the mind as “a sort of tangled-up bureaucracy” whose parts remain ignorant of one another.

Just as the intelligence of a beehive or a traffic jam resides in the patterns of these inert, intersecting parts, so human consciousness is merely the abstract relationships that emerge out of these systems: once you get to the lowest level of intelligence, you inevitably find, as Minsky put it, agents that “cannot think at all.” There is no place in this model for what we typically think of as interior experience, or the self.

Embodied artificial intelligence is being pursued in laboratories using humanoid robots equipped with sensors and cameras that endow the robots with sensory functions and motor skills. The theory is that these sensorimotor capacities will eventually lead to more advanced cognitive skills, such as a sense of self or the ability to use language, though so far this has not happened.