This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Wednesday, June 21, 2023
Turing, von Neumann, and the computational architecture of biological machines
In the mid-1930s, the English mathematician and logician Alan Turing invented an imaginary machine that could emulate the way human computers manipulate finite symbolic configurations. His machine launched the field of computer science and provided a foundation for the modern programmable computer. A decade later, building on Turing’s machine, the Hungarian-American mathematician John von Neumann invented an imaginary self-reproducing machine capable of open-ended evolution. Through his machine, von Neumann answered one of the deepest questions in biology: Why do all living organisms carry a self-description in the form of DNA? The story of how two pioneers of computer science stumbled on the secret of life many years before the discovery of the DNA double helix is not well known, even to biologists, and you will not find it in biology textbooks. Yet the story is just as relevant today as it was eighty years ago: Turing and von Neumann left a blueprint for studying biological systems as if they were computing machines. This approach may hold the key to answering many remaining questions in biology and could even lead to advances in computer science.
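As a concrete gloss on what Turing’s imaginary machine does, here is a minimal sketch in Python (my own illustration; the rule table, tape, and function names are inventions for this post, not anything from the original paper): a finite table of (state, symbol) rules that rewrites symbols on a tape one cell at a time, in this case flipping every bit it reads.

table = {
    ('flip', '0'): ('1', +1, 'flip'),   # (state, symbol) -> (write, move, next state)
    ('flip', '1'): ('0', +1, 'flip'),
    ('flip', '_'): ('_', 0, 'halt'),    # blank cell: stop
}

def run(tape, state='flip', head=0):
    cells = list(tape) + ['_']          # '_' marks the blank end of the tape
    while state != 'halt':
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += move
    return ''.join(cells).rstrip('_')

print(run('10110'))                     # -> '01001'

Von Neumann’s self-reproducing automaton adds one further ingredient to this picture: the machine carries a description of itself on the tape, which it uses in two ways, interpreting it as instructions for building a copy and duplicating it verbatim into the offspring - the same dual role the paragraph above attributes to DNA.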
Tuesday, June 20, 2023
The psychological illusion of "The Good Old Days"
Anecdotal evidence indicates that people believe that morality is declining. In a series of studies using both archival and original data (n = 12,492,983), we show that people in at least 60 nations around the world believe that morality is declining, that they have believed this for at least 70 years and that they attribute this decline both to the decreasing morality of individuals as they age and to the decreasing morality of successive generations. Next, we show that people’s reports of the morality of their contemporaries have not declined over time, suggesting that the perception of moral decline is an illusion. Finally, we show how a simple mechanism based on two well-established psychological phenomena (biased exposure to information and biased memory for information) can produce an illusion of moral decline, and we report studies that confirm two of its predictions about the circumstances under which the perception of moral decline is attenuated, eliminated or reversed (that is, when respondents are asked about the morality of people they know well or people who lived before the respondent was born). Together, our studies show that the perception of moral decline is pervasive, perdurable, unfounded and easily produced. This illusion has implications for research on the misallocation of scarce resources, the underuse of social support and social influence.
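To make the authors’ proposed mechanism concrete, here is a toy simulation of my own (the probabilities and function names are arbitrary assumptions, not the authors’ model): bad acts are more likely to be noticed than good ones (biased exposure), and bad memories fade faster than good ones (biased memory), so a world whose morality never changes still looks like it is getting worse.

import random
random.seed(0)

P_GOOD = 0.5          # true, unchanging fraction of good acts
N_ACTS = 100_000

def attended(acts, p_notice_good=0.3, p_notice_bad=0.7):
    """Biased exposure: bad acts are more likely to be noticed."""
    return [a for a in acts if random.random() < (p_notice_good if a else p_notice_bad)]

def remembered(noticed, p_keep_good=0.8, p_keep_bad=0.4):
    """Biased memory: bad memories fade faster than good ones."""
    return [a for a in noticed if random.random() < (p_keep_good if a else p_keep_bad)]

world = [random.random() < P_GOOD for _ in range(N_ACTS)]
today = attended(world)                  # how the present is perceived
past = remembered(attended(world))       # how the same kind of world is remembered

print(f"true share of good acts:       {sum(world) / len(world):.2f}")
print(f"perceived share of good today: {sum(today) / len(today):.2f}")
print(f"remembered share in the past:  {sum(past) / len(past):.2f}")

With these made-up numbers the remembered past comes out around 46 percent good acts versus about 30 percent perceived today, even though the true rate is 50 percent throughout - morality appears to decline while nothing actually changes.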
Saturday, June 10, 2023
How do we define spirituality, and what is its role in our daily lives?
This post is primarily directed to Austin, TX people who will be attending the 2 p.m. Sunday, July 2 meeting of the Austin Rainbow Forum, now in its 5th year. We will be discussing what spirituality means to each of us and what its role in our lives is. Look below to see how the three major AI chatbots responded to my request for some guidance on how to moderate such a discussion. I'll also pass on this YouTube video, titled "What To Remember To Be Happy," pointed to by Rainbow Forum discussant David Soileau.
Question to the chat bots:
I have to moderate a discussion session on the topic of spirituality in our daily lives. What advice would you give me? How would you define spirituality?
Response from ChatGPT 4:
That sounds like an interesting topic, Deric! Here are some suggestions for leading a discussion on spirituality in daily life:
Start by defining what spirituality means to you and how you understand it. This will help participants understand the context of the discussion and provide a starting point for exploring the topic.
Encourage participants to share their own experiences of spirituality in their daily lives. This can include practices such as meditation, prayer, or simply taking time for reflection and introspection.
Explore the ways in which spirituality can help us cope with the challenges and stresses of everyday life. For example, some people find that spiritual practices help them find meaning and purpose in difficult situations or provide a sense of inner peace and calm.
Discuss the role that community and social connections can play in spirituality. Many people find that sharing their spiritual journey with others and being part of a supportive community is an important aspect of their spiritual practice.
Finally, encourage participants to think about how they can incorporate spirituality into their daily lives in practical ways. This might involve setting aside time each day for meditation or prayer, making a conscious effort to be more mindful and present in their interactions with others, or finding ways to give back to their community in a spiritually meaningful way.
As for the definition of spirituality, it can mean different things to different people. Generally, spirituality refers to a sense of connection to something larger than oneself, whether that be a higher power, the natural world, or a broader sense of humanity. It often involves a search for meaning, purpose, and transcendence beyond the material world.
Response from Google Bard:
Spirituality is a broad concept that can be defined in many ways. For some people, it is a belief in a higher power or a connection to something larger than themselves. For others, it is a sense of inner peace or well-being. Spirituality can also be expressed through religious practices, meditation, or simply spending time in nature.
When leading a discussion on spirituality, it is important to be respectful of all beliefs. There is no right or wrong answer when it comes to spirituality, and everyone's experience is unique. The goal of the discussion should be to learn more about each other's beliefs and to explore how spirituality can impact our daily lives.
Here are some tips for leading a discussion on spirituality:
Be respectful of all beliefs.
Encourage open and honest discussion.
Share your own experiences, if you are comfortable doing so.
Ask questions to learn more about each other's beliefs.
Be open to new ideas.
Remember that there is no right or wrong answer when it comes to spirituality.
Here are some questions you could ask to start the discussion:
What does spirituality mean to you?
How does spirituality impact your daily life?
What are some of your favorite spiritual practices?
What are some of the challenges you face in your spiritual journey?
What advice would you give to someone who is new to spirituality?
I hope this helps!
Response from Microsoft Bing:
Leading a discussion session on spirituality can be a great way to explore this topic with others. Here are some tips that might help you:
Start with an icebreaker activity that allows participants to introduce themselves and get to know each other better.
Be clear about your expectations and intentions for the discussion session.
Use inclusive language that welcomes everyone to participate.
Ask open-ended questions that encourage participants to share their thoughts and feelings.
Listen actively to what others are saying and show respect for their opinions.
Encourage participants to share their own experiences and insights.
Be prepared to handle any conflicts or disagreements that may arise during the discussion.
Spirituality can mean different things to different people, but it generally refers to a sense of connection with something greater than oneself. It can involve a belief in God or a higher power, but it can also be more broadly defined as a sense of purpose or meaning in life.
I hope this helps! Let me know if you have any other questions.
Friday, June 09, 2023
A new place for consciousness in our understanding of the universe
Wednesday, June 07, 2023
Negativity drives online news consumption
Why paying attention to the news can come close to being a recipe for clinical depression... An open access article from Robertson et al. Their abstract:
Online media is important for society in informing and shaping opinions, hence raising the question of what drives online news consumption. Here we analyse the causal effect of negative and emotional words on news consumption using a large online dataset of viral news stories. Specifically, we conducted our analyses using a series of randomized controlled trials (N = 22,743). Our dataset comprises ~105,000 different variations of news stories from Upworthy.com that generated ∼5.7 million clicks across more than 370 million overall impressions. Although positive words were slightly more prevalent than negative words, we found that negative words in news headlines increased consumption rates (and positive words decreased consumption rates). For a headline of average length, each additional negative word increased the click-through rate by 2.3%. Our results contribute to a better understanding of why users engage with online media.
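As a back-of-the-envelope reading of the reported effect size (the baseline rate below is my own assumption, not a number from the paper), treating each additional negative word as multiplying the click-through rate by about 1.023 plays out like this:

baseline_ctr = 0.015                       # assumed baseline click-through rate (~1.5%)
for n_negative in range(4):
    ctr = baseline_ctr * 1.023 ** n_negative
    print(f"{n_negative} negative words -> CTR ~ {ctr:.3%}")

A shift of a few percent relative to baseline sounds small, but at the scale of hundreds of millions of impressions it adds up to a strong pull toward negative headlines.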
Monday, June 05, 2023
A simple heuristic for distinguishing lie from truth
Decades of research have shown that people are poor at detecting deception. Understandably, people struggle with integrating the many putative cues to deception into an accurate veracity judgement. Heuristics simplify difficult decisions by ignoring most of the information and relying instead only on the most diagnostic cues. Here we conducted nine studies in which people evaluated honest and deceptive handwritten statements, video transcripts, videotaped interviews or live interviews. Participants performed at the chance level when they made intuitive judgements, free to use any possible cue. But when instructed to rely only on the best available cue (detailedness), they were consistently able to discriminate lies from truths. Our findings challenge the notion that people lack the potential to detect deception. The simplicity and accuracy of the use-the-best heuristic provides a promising new avenue for deception research.
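Here is a minimal sketch (my illustration, not the authors’ materials or threshold) of what a “use-the-best” judgment looks like in practice: rate several cues, then ignore all of them except the single most diagnostic one, detailedness.

def judge(cues, threshold=0.5):
    """Use-the-best: classify on the single most diagnostic cue and ignore the rest."""
    return "truth" if cues["detailedness"] > threshold else "lie"

print(judge({"detailedness": 0.8, "nervousness": 0.9, "eye_contact": 0.2}))  # truth
print(judge({"detailedness": 0.3, "nervousness": 0.1, "eye_contact": 0.9}))  # lie

The point of the heuristic is precisely that the other cues never enter the decision.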
Friday, June 02, 2023
Gender inequality is associated with differences between the brains of men and women
Sobering but not surprising analysis by Zugman et al.:
Significance
Gender inequality is associated with worse mental health and academic achievement in women. Using a dataset of 7,876 MRI scans from healthy adults living in 29 different countries, we here show that gender inequality is associated with differences between the brains of men and women: cortical thickness of the right hemisphere, especially in limbic regions such as the right caudal anterior cingulate and right medial orbitofrontal, as well as the left lateral occipital, present thinner cortices in women compared to men only in gender-unequal countries. These results suggest a potential neural mechanism underlying the worse outcome of women in gender-unequal settings, as well as highlight the role of the environment in the brain differences between women and men.
Abstract
Gender inequality across the world has been associated with a higher risk to mental health problems and lower academic achievement in women compared to men. We also know that the brain is shaped by nurturing and adverse socio-environmental experiences. Therefore, unequal exposure to harsher conditions for women compared to men in gender-unequal countries might be reflected in differences in their brain structure, and this could be the neural mechanism partly explaining women’s worse outcomes in gender-unequal countries. We examined this through a random-effects meta-analysis on cortical thickness and surface area differences between adult healthy men and women, including a meta-regression in which country-level gender inequality acted as an explanatory variable for the observed differences. A total of 139 samples from 29 different countries, totaling 7,876 MRI scans, were included. Thickness of the right hemisphere, and particularly the right caudal anterior cingulate, right medial orbitofrontal, and left lateral occipital cortex, presented no differences or even thicker regional cortices in women compared to men in gender-equal countries, reversing to thinner cortices in countries with greater gender inequality. These results point to the potentially hazardous effect of gender inequality on women’s brains and provide initial evidence for neuroscience-informed policies for gender equality.
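In symbols (my notation and a hedged reading, not the authors’ formulas), the random-effects meta-regression described above amounts to modeling each sample’s standardized sex difference in cortical thickness as a function of its country’s gender-inequality score:

d_k = \beta_0 + \beta_1 \, \mathrm{GII}_k + u_k + \varepsilon_k, \qquad u_k \sim \mathcal{N}(0, \tau^2), \quad \varepsilon_k \sim \mathcal{N}(0, v_k)

where d_k is the standardized male-female thickness difference in sample k, GII_k is that country’s gender-inequality measure, u_k captures between-sample heterogeneity, and v_k is sample k’s known sampling variance. The reported finding corresponds to \beta_1 being reliably different from zero for the right-hemisphere regions listed.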
Wednesday, May 31, 2023
Constructing Self and World
There is a strong similarity between the predictive processing brain model that has been the subject of numerous MindBlog posts and the operations that ChatGPT and other generative pre-trained transformer algorithms are performing, with the ‘priors’ of the predictive processing model being equivalent to the ‘pre-trained’ weightings of the generative transformer algorithms.
The open and empty awareness of the non-dual perspective corresponds to the ‘generator’ component of the AI algorithms. It is what can begin to allow reification - rendering opaque rather than transparent - of the self model and other products of the underlying content-free open awareness generator (such as our perceptions of trees, interoceptive signals, cultural rules, etc.). It enables seeing rather than being the glass window through which you are viewing the tree in the yard. The rationale of non-dual awareness is not to have ‘no-self.’ The ‘self’ prior is there because it is a very useful avatar for interactions. Rather, the non-dual perspective can enable a tweaking or re-construction of previously transparent priors - now rendered opaque - that lets go of their less useful components. The point of having an expanded 'no self' is to become aware of and refine the illusions or phantasies about what is in our internal and external worlds that arise from it.
The paragraphs above derive from my listening to one of Sam Harris’ podcasts in his “Making Sense” series titled “Constructing Self and World.” It was a conversation with Shamil Chandaria, who is a philanthropist, serial entrepreneur, technologist, and academic with multidisciplinary research interests. During the conversation a number of ideas I am familiar with were framed in a very useful way, and I wanted to put them down and pass on to MindBlog readers the thumbnail summary above.
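For readers who like to see the gears, here is a cartoon of the predictive-processing update sketched above, written as a few lines of Python (my own toy, not anything from the podcast): a prior is repeatedly corrected by prediction error, and that slowly learned prior is what the analogy maps onto a transformer’s ‘pre-trained’ weights.

observations = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9]    # incoming sensory evidence over time
prior = 0.0                                      # the current best guess about the world
learning_rate = 0.5                              # how strongly each error revises the prior

for obs in observations:
    prediction_error = obs - prior               # surprise: mismatch between input and prior
    prior += learning_rate * prediction_error    # update the internal model
    print(f"observed {obs:.1f}, prior is now {prior:.2f}")

When the world shifts (the jump from ~1 to ~5 in this toy series), the prior lags and then catches up; rendering a prior ‘opaque’ in the sense used above means noticing that the guess, not the world itself, is what one is experiencing.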
Monday, May 29, 2023
To fulfill its promise, artificial intelligence needs to deepen human intelligence.
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up...A question to ask about large language models, then, is where does trustworthiness not matter?...A.I. will be great for creating content where reliability isn’t a concern. The personalized video games and children’s shows and music mash-ups and bespoke images will be dazzling...But where reliability matters — say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions — deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.
...Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second...Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?...Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains — though real — are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.
Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds...My third concern is related to this use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing...It’s the time spent inside an article or book drawing connections to what we know and having thoughts we would not otherwise have had that matters...No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real.
These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that “if men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.” I think the trade-off here was worth it — I am, after all, a writer — but it was a trade-off. Human beings really did lose faculties of memory we once had.
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don’t overwhelm and distract and diminish us. We failed that test with the internet. Let’s not fail it with A.I.
Friday, May 26, 2023
When Their Idea of Liberty Is Your Idea of Death
This post gives a snapshot of another really excellent Thomas Edsall piece, provided courtesy of Chat GPT-4, making the central points accessible to MindBlog readers who are blocked by the NYTimes paywall.
In this essay, Thomas Edsall explores the concept of freedom as a contested ideal in American politics, particularly in the run-up to the 2024 election.
President Biden, in announcing his re-election bid, frames freedom as being under threat by MAGA extremists. He emphasizes defending democracy, personal freedom, and civil rights, portraying the election as a choice between more or less freedom, more or fewer rights.
The essay also highlights contrasting views from Florida Governor Ron DeSantis, who views the threat to freedom as coming not only from government actions, but also from powerful institutions pushing a "woke" agenda. DeSantis criticizes elites in federal bureaucracy, media, Big Tech, and universities for using undemocratic means to impose their ideology on society.
Edsall cites the political philosopher Isaiah Berlin's two concepts of liberty: negative freedom, being free from interference by others, and positive freedom, the individual's desire to be their own master. This dichotomy is mirrored in the divergent notions of freedom espoused by figures such as George Wallace and Martin Luther King Jr.
The essay also examines differing political interpretations of freedom, from Biden's liberal tradition linked to F.D.R., emphasizing meeting material needs and democratic institutions, to Reagan's view of freedom as being largely free from government interference. Smith argues that Trump and DeSantis' notion of freedom is more constrained and restrictive, supporting democracy only as long as it produces the results they want.
Edsall points out that both the left and the right have imposed limitations on freedom. He notes conservative Republicans' restrictions on teaching about race and sex, banning books, suppressing voting, and barring local ordinances. Meanwhile, left-leaning students and faculty have sought to "cancel" figures who violate progressive orthodoxy, disrupt conservative speakers, and restrict teaching material considered harmful to marginalized groups.
Democrats and Republicans are competing to define and advocate for freedom. Isabel V. Sawhill from the Brookings Institution argues that Democrats have substance behind their freedom rhetoric, citing battles over abortion rights and Republicans' changing attitude towards the business community.
Francis Fukuyama highlights the rise of Trumpist populism and the shift of American conservatives towards a more European conservative approach. He argues that the right is using exaggerated fears of the "woke" agenda to justify authoritarian assaults on democracy, but Edsall points out that some voters view liberal policies as infringing on their freedom.
The essay examines homelessness as an ongoing debate over freedom and how progressives historically took the lead in defining and advancing freedom. William Galston argues that progressives lost their command of freedom in the 1960s, allowing conservatives to claim it.
As the 2024 election approaches, both parties have some favorable signs in polls. Republicans can point to Biden's vulnerability and conservative attitudes towards transgender issues. Democrats can highlight the public's opposition to book banning and strict abortion bans. The essay concludes by emphasizing the importance of the election in determining the nation's direction on freedom and liberty, and warns Democrats not to take anything for granted.
Wednesday, May 24, 2023
Using AI to decipher words and sentences from brain scans
A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.
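To make the abstract’s decoder a bit less abstract, here is a deliberately toy sketch of one plausible architecture (the vocabulary, the fake encoding model, and the scoring are all my own stand-ins, not the authors’ code): candidate word sequences are proposed, an encoding model predicts the brain response each candidate should evoke, and a small beam search keeps whichever sequences best match the recorded data.

import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dog", "ran", "home", "cat", "sat"]

def encoding_model(seq):
    """Toy stand-in: map a word sequence to a predicted brain-response vector."""
    vec = np.zeros(len(VOCAB))
    for i, word in enumerate(seq.split()):
        vec[VOCAB.index(word)] += 1.0 / (i + 1)   # earlier words weigh more
    return vec

target = "the dog ran home"                                          # the hidden perceived speech
recorded = encoding_model(target) + rng.normal(0, 0.01, len(VOCAB))  # a noisy "scan" of it

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def decode(n_words=4, beam_width=3):
    beams = [("", -np.inf)]                       # (word sequence, score)
    for _ in range(n_words):
        candidates = []
        for seq, _ in beams:
            for word in VOCAB:                    # a real decoder would use a language model here
                new_seq = (seq + " " + word).strip()
                score = similarity(encoding_model(new_seq), recorded)
                candidates.append((new_seq, score))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

print(decode())                                   # recovers "the dog ran home"

The real system's encoding model is fit on many hours of a subject's fMRI responses to natural speech, which is why the abstract stresses that subject cooperation is needed both to train and to apply the decoder.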
Monday, May 22, 2023
Multilevel cultural evolution - new theory and practical applications
I want to point to an exceptionally lucid and well-written exposition by David Sloan Wilson et al. and pass on the opening paragraphs of their article, which frame the context for understanding how group-level selection operates at multiple levels, from cells to societies to the entire earth system. I strongly recommend reading through it slowly and carefully. If that seems a bit much, skip down to the section titled "The earth system as the ultimate unit of selection." (Added note: for a comprehensive game-theoretic analysis of evolutionary dynamics within and among competing groups, see Cooney et al.)
Darwin’s theory of evolution is celebrated for its explanatory scope, prompting the geneticist Theodosius Dobzhansky to declare in 1973 that “nothing in biology makes sense except in the light of evolution”. However, what became the “modern synthesis” can also be called the “great constriction.” The study of evolution was confined almost entirely to genetic evolution, relegating the study of human cultural change to other disciplines.
It was not until the 1970s that evolutionary thinkers started to go back to basics by defining natural selection as Darwin did—any process that combines the triad of variation, selection, and replication—irrespective of the proximate mechanisms. The first mathematical models of cultural evolution were based on population genetics models developed 50 y earlier.
Today, the study of cultural evolution in humans and other species is in full swing —and these advances in basic scientific knowledge have practical applications. In this article, we will first review major developments in our basic understanding of human cultural evolution. Then, we will show how they can be applied to a diversity of positive change efforts, no matter what the scale (e.g., from the individual person to global governance) or topic domain. We elaborate for the topics of complex systems science and engineering, economics and business, mental health and well-being, and global change efforts.
Here is the text from the section near the end of the article, titled "The earth system as the ultimate unit of selection":
The earth system as the ultimate unit of selection.
We have seen that multilevel selection is like a perverse alchemist who turns gold into lead. Self-preservation—a good thing—becomes disruptive selfishness. Helping kith and kin—a good thing—becomes cronyism and nepotism. The welfare of my nation—a good thing—leads to international conflicts. Thriving economies—a good thing—leads to overheating the earth. Nearly everything that is pathological at higher scales can be traced to behaviors that are prosocial at smaller scales.
The only solution to this problem is for policies to be formulated with welfare of the whole-earth system in mind. This is not sufficient by itself, as we will elaborate below, but the basic logic of multilevel selection reveals that it is necessary. There is no invisible hand to permute lower-level interests into higher-level welfare other than our own conscious efforts.
Superficially, it might seem that selection at the planetary scale is impossible because our planet is not competing with any other planets. What makes planet-level selection possible is a decision-making process that makes planetary welfare the target of selection, orients variation around the target, and identifies and replicates better practices, realizing they will be sensitive to context. This is how conscious cultural evolution takes place at smaller scales, as described in the previous sections, and can also take place at the global scale.
The concept of the whole earth as a cooperative system and the primary social identity of an individual was beyond the imagination only a few centuries ago. Nevertheless, when it comes to cultural evolution, the past does not predict the future. Given the myriad forms of globalization that have taken place during the last century, it is difficult not to consider the whole earth as a single system that must transition from CAS2 (“survive”) to CAS1 (“thrive”). Human social groups are nearly always socially constructed. To say “I am first and foremost a human being and citizen of the earth” is no more difficult than to say “I am an American” or “I am a Christian.”
Many people have already adopted a whole-earth ethic, which does manifest as action to a degree—but they do not have a common and authoritative theoretical framework to invoke and from which to derive effective policies. This is in contrast to neoclassical economics and its elaborate mathematical justification of the invisible hand metaphor. Multilevel selection reveals the invisible hand metaphor to be profoundly untrue. It is simply not the case, in economics or any other policy domain, that the lower-level pursuit of self-interest robustly benefits the common good. However, multilevel selection does lead to another, more legitimate conception of the invisible hand metaphor. We must act in two capacities: as designers of whole systems and as participants in the systems that we design. As designers, we must have the welfare of the whole system in mind, which is the opposite of the invisible hand metaphor. As participants, we can indeed respond to our local concerns without having the whole system in mind. Put another way, selection at the level of whole systems is the hand, which winnows the small set of lower-level behaviors that benefit the common good from the much larger set of lower-level behaviors that undermine the common good.
Friday, May 19, 2023
Liberty, Equality, and Fraternity in the Postmodern, Neoliberal Age
I want to share with MindBlog readers the background material prepared by Austin Rainbow Forum member Daniel Owen to support our Sunday June 4 2023 discussion on the conflict between the ideals of liberty, equality, and fraternity:
Liberty, Equality, and Fraternity in the Postmodern, Neoliberal Age
The French motto of liberty, equality, and fraternity has symbolized democracy and human rights since the French Revolution. Our U.S. Declaration of Independence declares, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” No mention of fraternity, but in the 1835 political theory classic “Democracy in America,” the French political philosopher Alexis de Tocqueville observed this about American society:
“Americans of all ages, all conditions, all minds constantly unite. Not only do they have commercial and industrial associations in which all take part, but they also have a thousand other kinds: religious, moral, grave, futile, very general and very particular, immense and very small; Americans use associations to give fêtes, to found seminaries, to build inns, to raise churches, to distribute books, to send missionaries to the antipodes; in this manner they create hospitals, prisons, schools. Finally, if it is a question of bringing to light a truth or developing a sentiment with the support of a great example, they associate. Everywhere that, at the head of a new undertaking, you see the government in France and a great lord in England, count on it that you will perceive an association in the United States.”
Perhaps the ideal of fraternity as civic-mindedness was just part of the character of early American culture. Is it still, or have we lost that?
Earlier this year, I listened to a talk by a Unitarian minister who suggested that liberty, equality, and fraternity are like a three-legged stool. If one of the legs becomes too long or too short, the stool is unbalanced. He thought our culture was out of balance with too much emphasis on individual liberty at the expense of equality and fraternity. Agree? Disagree?
How do we find a balance?
How do we deal with differences between the political left and right regarding what form these ideals should take?
How relevant are these ideals today in a globalized world dominated by neoliberalism?
How can we reconcile “We hold these truths to be self-evident…” with the relativism of postmodernism, where such declarations may be seen as socially constructed metanarratives used to advance the power and interests of some groups at the expense of others?
One possible response is to rethink what these principles mean in the 21st century. Liberty does not have to mean unlimited freedom to pursue one’s own agenda. It can also mean freedom to participate in democratic decision-making, to express one's identity and culture, and to access education, health care, and other public goods. Equality does not have to mean uniformity or conformity. It can also mean respect for diversity, human rights, and social justice. Fraternity does not have to mean exclusion or nationalism. It can also mean solidarity, empathy, and mutual aid among people of different backgrounds, beliefs, and interests.
Thoughts?
Recommended reading/listening:
When Their Idea of Liberty is Your Idea of Death (12-minute read)
What It Means to Be Woke (5-minute read)
What Is Postmodernism? (12-minute video)
Extra Credit: Why We're Still Postmodern (56-minute podcast. A lively discussion of postmodernism, neoliberalism, and contemporary culture from a very different angle than the previous video. Worth your time)
Wednesday, May 17, 2023
An evolutionary psychology perspective on the politics of leadership
Thomas Edsall does another of his scholarly analyses, this time putting Trump as the wannabe alpha male in a larger evolutionary context. I started this post with a number of clips from Edsall's text, but then decided to have another go at asking ChatGPT 4 to do a summary for me. In response to my request "Please condense and summarize the following essay by Thomas Edsall" it did a quite credible job, and I'm going to be lazy and pass on what the bot did!
Thomas Edsall's essay discusses how Donald Trump's alpha-male image conflicts with the barrage of criminal investigations and civil suits he faces, threatening his self-portrayal and traditional leadership standards. Trump's bid for dominance never garnered majority support, yet he still poses a serious threat to American democracy.
The essay delves into the evolutionary and neuroendocrine foundations of human aggression, highlighting the crucial role of coalition formation in overcoming coercive alpha males. Early sub-elite males formed coalitions to overthrow alpha males who violated social norms, paving the way for a more egalitarian society. This development has been traced to the roots of democracy, as it eventually led to the state's acquisition of police power and the legal use of force to enforce norms and laws.
The criminal investigations and civil suits against Trump represent America's democratic system of government, attempting to constrain a deregulated alpha-male wannabe. Over time, complex systems have evolved to limit the power of leaders, like coalitions, power-sharing agreements, parliaments, and constitutions. However, the dynamic of dominance persists, as voters often favor taller candidates and crave strong leaders.
Prof. Rose McDermott explains the process of self-domestication in human societies, where beta and gamma males work together to unseat coercive alpha males who exploit the community, leading to more egalitarian dynamics. McDermott views former President Trump as an example of a coercive alpha male, and suggests that the polarization in the U.S. has prevented the formation of coalitions strong enough to oppose him.
Some academics disagree with this biological explanation for modern social behavior, such as Prof. John Horgan, who finds it deterministic and promoting fatalism, and Prof. R. Brian Ferguson, who disputes the idea of alphas facing death due to sub-alpha elite coalitions. On the other hand, Prof. Dan McAdams argues that Trump's personality and authoritarian dynamic align with an older, evolutionarily-driven paradigm of dominance.
Prof. Kevin Smith attributes the rise of coercive alpha males and other unprincipled personalities in politics to the weakening of democratic norms, pointing out that these norms are difficult to institutionalize and easy to destroy. Once gone, they may be difficult to re-establish, leaving the political system vulnerable to demagogues and tyrants.
Monday, May 15, 2023
People who talk too much
I host a monthly discussion group in Austin, TX, the Austin Rainbow Forum, which meets at 2 pm on the first Sunday of every month to consider interesting topics and ideas. This past May 7, one of our group members led a discussion of "overtalking" in the modern world, which, according to Dan Lyons in his recent book "STFU: The Power of Keeping Your Mouth Shut in an Endlessly Noisy World," has got us all spouting opinions, giving advice, and getting ourselves into trouble. The central ideas in Lyons’ book are summarized in this Time Magazine article. I looked through a reviewer’s copy of the book I was sent and suggest that it is worth a look if you are stimulated by the summary article. The bottom line of the book could be stated as "Shut up and listen instead of talking so much." Lyons offers five nudges:
-When possible, say nothing
-Master the power of the pause
-Quit social media
-Seek out silence
-Learn how to listen
Lyons is a professional columnist who writes with a very engaging style, even if the level of his coverage is sometimes a bit superficial. (He quotes a researcher who studied brain activity and "figured out what causes talkaholism"; unfortunately, on doing a quick lookup of the work describing the neuronal measurements, I found that there is no there there.)
Friday, May 12, 2023
Virality
This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13 from the seventh section of her book, titled "Virality."
The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.
The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.
When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
The general public has become preoccupied by robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.
The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to decipher between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.
Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that swept the Go championship in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”
If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” a phrase that is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.
If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.”
Wednesday, May 10, 2023
MindBlog paragraphs bloviating on the nature of the self ask Google Bard and Chat GPT 4 for help
I can’t resist passing on this example of checking out what Google Bard and OpenAI’s Chat GPT-4 might do with some of my opaque paragraphs. I asked them to clarify and improve the following two paragraphs, which they did (Microsoft Bing again proved to be useless).
My paragraphs:
There can be a calm in the understanding that our transient Anthropocene era is a brief blip in the history of this planet and the cosmos. A disappearance of Homo Sapiens as we know it seems a certainty, a transition either to extinction or to a further evolved trans-human form. It doesn’t seem useful to become agitated about this, but rather live fully in the present moment as grateful witnesses to what is unfolding.
There is an implicit ontology and epistemology of the experienced self underlying this ‘pay attention in the present’ stance…an implicit assumption of the nature of the ‘what it is’ that is the resident experienced self. The ontological assumption is that this ‘what it is’ is an emergent property of processes that can not be known - there is no there there. The epistemological assumption is that everything we experience - all knowledge, thought and feeling - are illusions that can be known only with respect to each other.
Here is Chat GPT 4’s concise effort, a slightly edited version of its second regenerated response, which was a bit better than its first version:
There can be a sense of serenity in acknowledging that our fleeting Anthropocene epoch represents just a brief moment in the vast history of our planet and the cosmos. The eventual transformation or disappearance of Homo sapiens as we know it is inevitable, whether through extinction or evolution into a more advanced transhuman form. Becoming overly concerned about this outcome is not particularly helpful; instead, we should fully embrace the present moment and appreciate the unfolding events as grateful observers.
An implicit ontology and epistemology of the experienced self underlie this 'live in the present' approach, suggesting certain assumptions about the nature of the self that we experience. The ontological assumption posits that the self is an emergent property of processes that cannot be truly understood, signifying that there is no fixed essence to be found. The epistemological assumption claims that all of our experiences, including knowledge, thoughts, and feelings, are illusory and can only be understood in relation to one another.
And here is Google Bard’s longer, and also excellent version:
The Anthropocene era is a brief blip in the history of this planet and the cosmos. It is certain that Homo Sapiens will eventually disappear, either through extinction or through evolution into a new form. It is therefore not useful to become agitated about this, but rather to live fully in the present moment as grateful witnesses to what is unfolding.
This "pay attention to the present" stance is based on an implicit ontology and epistemology of the experienced self. The ontological assumption is that the experienced self is an emergent property of processes that cannot be known directly. In other words, the self is not a thing, but rather a process that emerges from the interaction of many different factors, such as our genes, our environment, and our experiences. The epistemological assumption is that all knowledge, thought, and feeling are illusions that can only be known with respect to each other. In other words, we can never know anything directly, but only through the lens of our own subjective experience.
This view of the self has several implications. First, it means that we should not take ourselves too seriously. We are simply a product of our genes, our environment, and our experiences. We are not in control of our own destiny, and we should not try to be. Second, it means that we should not be afraid of change. The self is constantly changing, and this is a good thing. It allows us to grow and evolve. Third, it means that we should be grateful for the present moment. The present moment is all that we have, and it is the only moment that we can truly experience.
Monday, May 08, 2023
Can a theory of events-in-time - our future history - ever be crafted?
I first drafted this post as I usually do, to pass on clips that attempt to extract the core points from a fascinating brief piece by Venkatesh Rao. I then decided to ask Chat GPT 4, Google's Bard, and Microsoft Bing to "Please condense the core ideas in the following document into a few paragraphs." The response from Chat GPT 4 was amazing, so I paste it in here before giving you my clips:
Venkatesh Rao explores the idea of the "End of History" as an analogy to the "End of Biology," where biology transitioned from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter. The End of History could signify a move away from vitalist understandings of the logic of events-in-time. Rao acknowledges that the End of History theory is not widely popular due to its non-constructive nature, as it subverts vitalist understanding of events without providing a parsimonious and generative alternative.
Rao suggests that deeper logics of events-in-time exist in nature, with biological evolution and thermodynamics offering more complex explanations than a mere "one damn thing after another" perspective. He proposes that events-in-time might be viewed as the trace of an irreversible, non-ergodic computation process, with long-term collective memory playing a significant role. History, as we have understood it so far, might be considered a particular memory architecture with a specific irreversible and non-ergodic structure to the evolving computation. This perspective could potentially pave the way for a more refined and deliberate understanding of future events, reducing events-in-time after the End of History to a specialized kind of computation.
Here are my clips that Chat GPT 4 used to generate the above condensation:
What if the Hegel-Fukuyama idea of the End of History is something like Darwin’s idea of natural selection?
The Darwinian century began with The Origin of the Species in 1859 and ended with the discovery of the structure of DNA in 1953. Humanity experienced an End of Biology moment somewhere between those bookend events…a demotion of the discipline from an independent area of intellectual inquiry to one subservient to the sciences of non-living matter…Biology went from being an inscrutable aspect of providence to an emerging engineering discipline, subservient to physics and mathematics by way of chemistry.
By analogy, the End of History moment is something like an end to vitalist understandings of the logic of events-in-time…There is no role for divine agency, and no justification for assigning a particular positive or negative valence to apparent secular tendencies in the stream of events…The fact that the theory is historicist without being normative is perhaps what makes it so powerfully subversive. The End of History theory is the historicism that kills all other historicisms. Past the End of History, notions like progress must be regarded as analogous to notions like élan vital past the End of Biology. …it is undeniable that 30 years in, the End of History theory is still not particularly popular…One obvious reason is that it is non-constructive. It subverts a vitalist understanding of events in time without supplying a more parsimonious and generative alternative.
In Fukuyama’s theory, there are no notions comparable to variation and natural selection that allow us to continue making sense of events-in-time. There are no Mendelian clues pointing to something like a genetics of events-in-time. There is no latent Asimovian psychohistorical technology lurking in the details of the End of History theory…Perhaps one damn thing after another is where our understanding of events in time ought to end, for our own good.
I think this is too pessimistic though. Deeper logics of events-in-time abound in nature. Even biological evolution and thermodynamics, which are more elemental process aspects of reality, admit more than a one damn thing after another reading. History, as a narrower class of supervening phenomena that must respect the grammars of both, ought to admit more interesting readings, based on broadly explanatory laws that are consistent with both, but more specific than either. Dawkins’ memetic view of cultural evolution, and various flavors of social darwinism, constitute first-order attempts at such laws. Some flavors of cosmism and transhumanism constitute more complex attempts that offer hope of wresting ever-greater agency from the universe.
So what does explain the logic of events-in-time in a way that allows us to make sense of events-in-time past the End of History, in a way that improves upon a useless one damn thing after another sense of it, and says something more than the laws of evolution or thermodynamics?
I don’t have an answer, but I have a promising clue: somehow, events-in-time must be viewed as the trace of an irreversible, non-ergodic computation process, in which long-term collective memory plays a significant role.
History, as we have understood it so far, is something like a particular memory architecture that assumes a particular irreversible and non-ergodic structure to the evolving computation. The contingency and path dependence of events-in-time in human affairs is no reason to believe there cannot also be theoretical richness within the specificity. A richness that might open up futures that can be finely crafted with a psychohistorical deliberateness, rather than simply vaguely anticipated and crudely shaped.
Perhaps, just as life after the End of Biology was reduced to a specialized kind of chemistry, events-in-time, after the End of History, can be reduced to a specialized kind of computation.
Friday, May 05, 2023
The Data Deluge - Dataism
This post is the eighth installment in my passing on, to MindBlog readers and my future self, an idiosyncratic selection of clips from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapters 11 and 12 from the sixth section of her book, titled "Algorithm."
Chapter 11
In the year 2001 alone, the amount of information generated doubled that of all information produced in human history. In 2002 it doubled again, and this trend has continued every year since. As Anderson noted, researchers in virtually every field have so much information that it is difficult to find relationships between things or make predictions.
What companies like Google discovered is that when you have data on this scale, you no longer need a theory at all. You can simply feed the numbers into algorithms and let them make predictions based on the patterns and relationships they notice…
Google Translate “learned” to translate English to French simply by scanning Canadian documents that contained both languages, even though the algorithm has no model that understands either language.
These mathematical tools can predict and understand the world more adequately than any theory could. Petabytes allow us to say: ‘Correlation is enough,’…We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can let statistical algorithms find patterns where science cannot. Of course, data alone can’t tell us why something happens—the variables on that scale are too legion—but maybe our need to know why was misguided. Maybe we should stop trying to understand the world and instead trust the wisdom of algorithms…technologies that have emerged…have not only affirmed the uselessness of our models but revealed that machines are able to generate their own models of the world…this approach makes a return to a premodern epistemology…If we are no longer permitted to ask why…we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment…
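As a concrete, and entirely hypothetical, illustration of the "correlation is enough" stance described above, here is a minimal sketch of a theory-free predictor: it has no model of why the variables relate, and simply averages the outcomes of the most similar past cases. The data and variable names are invented for illustration.

```python
# A hedged sketch of "correlation is enough": a nearest-neighbour predictor that
# knows nothing about *why* the variables relate, yet still makes usable
# predictions purely from patterns in past data. The data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Pretend archive of past observations: five measured variables and one outcome.
X_past = rng.normal(size=(5000, 5))
y_past = X_past @ np.array([0.5, -1.2, 0.0, 2.0, 0.3]) + rng.normal(scale=0.1, size=5000)

def predict(x_new, k=25):
    """Predict the outcome for x_new by averaging the k most similar past cases.
    No hypothesis, no model of the domain -- just proximity in the raw data."""
    distances = np.linalg.norm(X_past - x_new, axis=1)
    nearest = np.argsort(distances)[:k]
    return y_past[nearest].mean()

x_new = rng.normal(size=5)
print("theory-free prediction:", round(predict(x_new), 3))
```

The sketch also shows the cost O’Gieblyn is pointing at: the predictor can be asked "what" but never "why," because there is no model inside it to interrogate.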
Deep learning, an especially powerful brand of machine learning, has become the preferred means of drawing predictions from our era’s deluge of raw data. Credit auditors use it to decide whether or not to grant a loan. The CIA uses it to anticipate social unrest. The systems can be found in airport security software…many people now find themselves in a position much like Job’s, denied the right to know why they were refused a loan or fired from a job or given a likelihood of developing cancer. It’s difficult, in fact, to avoid the comparison to divine justice, given that our justice system has become a veritable laboratory of machine-learning experiments…In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”
Hans Blumenberg, the postwar German philosopher, notes in his 1966 book The Legitimacy of the Modern Age—one of the major disenchantment texts—that theologians began to doubt around the thirteenth century that the world could have been created for man’s benefit…Blumenberg believed that it was impossible to understand ourselves as modern subjects without taking into account the crisis that spawned us. To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated as implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually returns to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real. The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.
Chapter 12
Here are a few clips from the last section of Chapter 12, on the errors of algorithms.
It’s not difficult to find examples these days of technologies that contain ourselves “in a different disguise.” Although the most impressive machine-learning technologies are often described as “alien” and unlike us, they are prone to errors that are all too human. Because these algorithms rely on historical data—using information about the past to make predictions about the future—their decisions often reflect the biases and prejudices that have long colored our social and political life. Google’s algorithms show more ads for low-paying jobs to women than to men. Amazon’s same-day delivery algorithms were found to bypass black neighborhoods. A ProPublica report found that the COMPAS sentencing assessment was far more likely to assign higher recidivism rates to black defendants than to white defendants. These systems do not target specific races or genders, or even take these factors into account. But they often zero in on other information—zip codes, income, previous encounters with police—that are freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.
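Here is a hedged toy sketch of the proxy problem described above. The numbers, the "zip code" proxy, and the decision rule are all invented for illustration (this is not COMPAS or any real system): a rule that never sees group membership, only a geographic proxy correlated with it, still ends up reproducing the disparity baked into its historical training data.

```python
# A toy illustration (my own, not from O'Gieblyn's text) of proxy discrimination:
# a "group-blind" decision rule trained on biased history still reproduces the bias,
# because zip code stands in for group membership. All numbers are invented.
import random

random.seed(1)

# Synthetic history: group A mostly lives in zip 1, group B mostly in zip 2,
# and past (biased) decisions approved group A far more often than group B.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    zipcode = 1 if (group == "A") == (random.random() < 0.9) else 2
    approved = random.random() < (0.8 if group == "A" else 0.3)   # biased past outcomes
    history.append((group, zipcode, approved))

# "Learn" a rule that only looks at zip code: approve if that zip's past approval rate > 50%.
rate = {z: sum(a for g, zc, a in history if zc == z) /
           sum(1 for g, zc, a in history if zc == z)
        for z in (1, 2)}

def decide(zipcode):
    return rate[zipcode] > 0.5

# The rule never sees group membership, yet its approvals mirror the old bias.
for grp in ("A", "B"):
    members = [(g, zc, a) for g, zc, a in history if g == grp]
    share = sum(decide(zc) for g, zc, a in members) / len(members)
    print(f"group {grp}: approval rate under the 'blind' rule = {share:.2f}")
```

Running it, roughly 90% of group A is approved versus roughly 10% of group B, even though the rule contains no reference to group at all; the feedback loop O’Gieblyn describes begins when these new decisions become the next round of "historical data."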
It is much easier…to blame injustice on faulty algorithms than it is to contend in more meaningful ways with what they reveal about us and our society. In many cases the reflections of us that these machines produce are deeply unflattering. To take a particularly publicized example, one might recall Tay, the AI chatbot that Microsoft released in 2016, which was designed to engage with people on Twitter and learn from her interactions with users. Within sixteen hours she began spewing racist and sexist vitriol, denied the Holocaust, and declared support for Hitler.
For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths. The desire to send humans to space was for her a metaphor for this dream of scientific transcendence. She tried to imagine what the earth and terrestrial human activity must look like from so far beyond its surface:
“If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats. Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.”
The problem is that a vantage so far removed from human nature cannot account for human agency. The view of earth from the Archimedean point compels us to regard our inventions not as historical choices but as part of an inexorable evolutionary process that is entirely deterministic and teleological, much like Kurzweil’s narrative about the Singularity. We ourselves inevitably become mere cogs in this machine, unable to account for our actions in any meaningful way, as the only valid language is the language of quantification, which machines understand far better than we do.
This is more or less what Jaron Lanier warned about in his response to Chris Anderson’s proposal that we should abandon the scientific method and turn to algorithms for answers. “The point of a scientific theory is not that an angel will appreciate it,” Lanier wrote. “Its purpose is human comprehension. Science without a quest for theories means science without humans.” What we are abdicating, in the end, is our duty to create meaning from our empirical observations—to define for ourselves what constitutes justice, and morality, and quality of life—a task we forfeit each time we forget that meaning is an implicitly human category that cannot be reduced to quantification. To forget this truth is to use our tools to thwart our own interests, to build machines in our image that do nothing but dehumanize us.
Thursday, May 04, 2023
Yuval Harari's vision of the end of human history.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures...Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it...What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?
What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.
What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.
...since ancient times humans have feared being trapped in a world of illusions...In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality. In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.
We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.
This text has been generated by a human.
Or has it?