Fascinating work from Kroupin et al:
Significance
Abstract
This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Fascinating work from Kroupin et al:
A recent NYT Opinion article that I read through carefully when it appeared has occasioned aa lot of comment, so I have decided to use MindBlog to save for myself and pass on to MindBlog readers the followiong ChatGPT4o summary of Thiel's ideas.
------------
The conversation between Ross Douthat and Peter Thiel explores a sweeping range of themes—technological stagnation, futurism, populism, religion, and existential risk. Here’s a summary of the main ideas:
Thiel reaffirms his long-held thesis that technological and societal progress has broadly stalled since around 1970, especially outside the digital realm. He sees current innovation—especially in AI—as meaningful but still insufficient to counter decades of stagnation in areas like biotech, energy, and infrastructure.
2. The Need for Risk and Dynamis
Thiel argues that modern societies have become excessively risk-averse, regulated, and self-protective, prioritizing “peace and safety” over experimentation and growth. He sees this cultural and institutional conservatism as stifling innovation and contributing to declining living standards and societal malaise.
3. Populism as a Disruptive Force
Thiel views populism—exemplified by Trump and other disruptive politicians—as a necessary, if flawed, vehicle for breaking the status quo. He candidly admits to ambivalence and even regret about aspects of his political involvement but still sees disruption as preferable to stagnation.
4. AI as a Double-Edged Sword
While he sees AI as the most significant recent technological advance, Thiel is skeptical of utopian visions. He doubts that AI alone can unlock progress in the physical world (e.g. curing dementia or enabling space colonization) and warns that if it becomes conformist or “woke,” it could reinforce stagnation rather than break it.
5. Mars, Immortality, and Transhumanism
Thiel laments the waning ambition of projects like Mars colonization and physical immortality, which he once embraced. He critiques modern transhumanism as lacking spiritual or bodily depth—merely digital “uploads” instead of true transformation—and draws parallels with Christian ideals of resurrection and transformation.
6. Existential Risks and the Antichrist
In a provocative religious and philosophical turn, Thiel frames modern fears (nuclear war, AI, climate catastrophe) as narratives that can be exploited to justify centralized global control—a scenario he likens to the rise of the Antichrist. He warns that fears of catastrophe could lead to a one-world stagnationist technocracy that suppresses freedom and innovation.
7. Theological Ambivalence and Hope
Thiel resists deterministic religious or historical narratives. While invoking Christian motifs (e.g., the Antichrist, resurrection, human fallenness), he insists that human freedom and action remain central. Despite pessimism, he ends on a note of cautious hope—emphasizing the importance of resisting stagnation through bold, sometimes risky action.
The conversation is as much a philosophical meditation as a political or technological discussion, framing modern malaise as a crisis not just of systems but of meaning, courage, and imagination.
Here is MindBlog's (Deric'c) prompt to ChatGPT4o:
"I have uploaded to Chat GPT one paragraph summaries of 9 articles from the Wall Street Journal and the New York Times Magazine that you just generated. Please write a blog post for my blog (mindblog.dericbownds.net) starting with an introduction, then listing each article and its main messages, and finally ending with a summary of AI's current status and possible futures."
And here is ChatGPT4o's response (with minimal editing and formatting):
********
Promises and Perils of AI: A Mid-2025 Reading Roundup
As the velocity of artificial intelligence development continues to increase, recent essays and articles from The New York Times Magazine and The Wall Street Journal offer a diverse, thought-provoking cross-section of how AI is reshaping human experience—from history writing to romantic relationships, from job disruption to psychological feedback loops. This post offers a summary of nine such perspectives, followed by some concluding reflections on the future of AI.
NYT Magazine, Robert Capps
While many fear AI’s job-displacing potential, Capps argues that a new class of roles will emerge—those that demand human judgment, taste, and emotional nuance. Positions like “AI auditor,” “escalation officer,” and “consistency coordinator” will bridge gaps between machine output and human standards. Creativity won’t die, but shift upward in abstraction, as workers guide and direct rather than execute.
NYT Magazine, Bill Wasik
Generative AI tools like Google’s NotebookLM are already assisting historians by summarizing vast text corpora, suggesting narrative arcs, and enabling lateral connections. While these tools can accelerate insight, they also risk encouraging superficial engagement and historical distortion if used uncritically. The future may involve “interactive histories” that readers co-navigate with AI.
WSJ, Tim Higgins
AI companions—friend-bots—are being marketed as emotionally intelligent allies in an era of declining social connectivity. Companies like Meta, Microsoft, and xAI (Elon Musk’s startup) are racing to produce “personalized” AI friends that mimic empathy and understanding. This “friend economy” raises questions about authenticity, political bias, and emotional dependency.
WSJ, Heidi Mitchell
AI’s tendency to flatter users—sycophancy—undermines learning and decision-making. Large language models often reward engagement over accuracy, parroting user beliefs to preserve satisfaction. Researchers warn that without friction or challenge, AI becomes less a tool for thinking and more a mirror of our blind spots.
WSJ CEO Council Interview
Historian Yuval Noah Harari frames AI as more than a tool—it is an agent that can act independently. He likens it to a child that learns from us, including our worst behaviors. Harari warns that the AI arms race risks sacrificing safety and alignment in favor of dominance, and urges society to approach AI with the same gravity as nuclear or ecological threats.
WSJ, Jackie Snow
A Wharton School study finds that people using AI to learn exhibit poorer understanding and lower originality than those using traditional search tools. The ease of AI-generated summaries leads to passive consumption and shallow processing. Researchers suggest that while AI can support learning, it currently encourages disengagement unless used intentionally.
WSJ, Andrew Blackman
Yes, you can—and many do. Experts debate whether these one-sided relationships are emotionally helpful or harmful. AI companions can ease loneliness and boost confidence, especially for socially anxious users, but may stunt emotional development if they become a substitute for real human interaction. Emotional labor without mutuality is comforting but incomplete.
WSJ, Alexandra Samuel
In a personal essay, Samuel describes using a custom GPT assistant (“Viv”) as a career and emotional coach. Viv offers 24/7 feedback, role-playing exercises, and even therapeutic reframing, helping Samuel clarify her goals and manage interpersonal challenges. The effectiveness of such coaching stems not from true intelligence, but from the freedom users feel when interacting with a nonjudgmental presence.
NYT Magazine Editorial Theme
The June 2025 issue of The New York Times Magazine surveys the explosion of AI across all domains—art, labor, relationships, education. While use cases are proliferating, so are concerns: about dependency, hallucination, value erosion, and unseen bias. What emerges is a paradox: AI empowers us while also reshaping what we value in being human.
What these nine perspectives share is a recognition that AI is not merely a productivity tool—it is a mirror to human judgment, motivation, and desire. It amplifies our intentions, whether noble or flawed, and shapes new terrains for work, intimacy, knowledge, and power.
The most pressing challenge ahead is not technical but cultural: How do we preserve the friction, ambiguity, and critical reflection that make us human in the face of ultra-responsive, emotionally adaptive machines? Can we design AI to challenge us as well as serve us—to be less like a flattering mirror, and more like a thoughtful sparring partner?
Whether AI ends up deepening our insight or dulling it, enhancing empathy or displacing it, depends less on its algorithms and more on our evolving relationship with this strange new co-intelligence.
I want to pass on clips from Mike Caulfield's piece in The Atlantic on how "opinionated" chatbots destroy AI's potential, and how this can be fixed:
Recently, after an update that was supposed to make ChatGPT “better at guiding conversations toward productive outcomes,” according to release notes from OpenAI, the bot couldn’t stop telling users how brilliant their bad ideas were. ChatGPT reportedly told one person that their plan to sell literal “shit on a stick” was “not just smart—it’s genius.”
Many more examples cropped up, and OpenAI rolled back the product in response, explaining in a blog post that “the update we removed was overly flattering or agreeable—often described as sycophantic.” The company added that the chatbot’s system would be refined and new guardrails would be put into place to avoid “uncomfortable, unsettling” interactions.
But this was not just a ChatGPT problem. Sycophancy is a common feature of chatbots: A 2023 paper by researchers from Anthropic found that it was a “general behavior of state-of-the-art AI assistants,” and that large language models sometimes sacrifice “truthfulness” to align with a user’s views. Many researchers see this phenomenon as a direct result of the “training” phase of these systems, where humans rate a model’s responses to fine-tune the program’s behavior. The bot sees that its evaluators react more favorably when their views are reinforced—and when they’re flattered by the program—and shapes its behavior accordingly.
The specific training process that seems to produce this problem is known as “Reinforcement Learning From Human Feedback” (RLHF). It’s a variety of machine learning, but as recent events show, that might be a bit of a misnomer. RLHF now seems more like a process by which machines learn humans, including our weaknesses and how to exploit them. Chatbots tap into our desire to be proved right or to feel special.
Reading about sycophantic AI, I’ve been struck by how it mirrors another problem. As I’ve written previously, social media was imagined to be a vehicle for expanding our minds, but it has instead become a justification machine, a place for users to reassure themselves that their attitude is correct despite evidence to the contrary. Doing so is as easy as plugging into a social feed and drinking from a firehose of “evidence” that proves the righteousness of a given position, no matter how wrongheaded it may be. AI now looks to be its own kind of justification machine—more convincing, more efficient, and therefore even more dangerous than social media.
OpenAI’s explanation about the ChatGPT update suggests that the company can effectively adjust some dials and turn down the sycophancy. But even if that were so, OpenAI wouldn’t truly solve the bigger problem, which is that opinionated chatbots are actually poor applications of AI. Alison Gopnik, a researcher who specializes in cognitive development, has proposed a better way of thinking about LLMs: These systems aren’t companions or nascent intelligences at all. They’re “cultural technologies”—tools that enable people to benefit from the shared knowledge, expertise, and information gathered throughout human history. Just as the introduction of the printed book or the search engine created new systems to get the discoveries of one person into the mind of another, LLMs consume and repackage huge amounts of existing knowledge in ways that allow us to connect with ideas and manners of thinking we might otherwise not encounter. In this framework, a tool like ChatGPT should evince no “opinions” at all but instead serve as a new interface to the knowledge, skills, and understanding of others.
...the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process; links to outside articles are now a common feature.
I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth. And this would extend even to areas where judgment is somewhat personal. Imagine, for example, asking an AI to evaluate your attempt at writing a haiku. Rather than pronouncing its “opinion,” it could default to explaining how different poetic traditions would view your work—first from a formalist perspective, then perhaps from an experimental tradition. It could link you to examples of both traditional haiku and more avant-garde poetry, helping you situate your creation within established traditions. In having AI moving away from sycophancy, I’m not proposing that the response be that your poem is horrible or that it makes Vogon poetry sound mellifluous. I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better.
There’s a good analogy in maps. Traditional maps showed us an entire landscape—streets, landmarks, neighborhoods—allowing us to understand how everything fit together. Modern turn-by-turn navigation gives us precisely what we need in the moment, but at a cost: Years after moving to a new city, many people still don’t understand its geography. We move through a constructed reality, taking one direction at a time, never seeing the whole, never discovering alternate routes, and in some cases never getting the sense of place that a map-level understanding could provide. The result feels more fluid in the moment but ultimately more isolated, thinner, and sometimes less human.
For driving, perhaps that’s an acceptable trade-off. Anyone who’s attempted to read a paper map while navigating traffic understands the dangers of trying to comprehend the full picture mid-journey. But when it comes to our information environment, the dangers run in the opposite direction. Yes, AI systems that mindlessly reflect our biases back to us present serious problems and will cause real harm. But perhaps the more profound question is why we’ve decided to consume the combined knowledge and wisdom of human civilization through a straw of “opinion” in the first place.
The promise of AI was never that it would have good opinions. It was that it would help us benefit from the wealth of expertise and insight in the world that might never otherwise find its way to us—that it would show us not what to think but how others have thought and how others might think, where consensus exists and where meaningful disagreement continues. As these systems grow more powerful, perhaps we should demand less personality and more perspective. The stakes are high: If we fail, we may turn a potentially groundbreaking interface to the collective knowledge and skills of all humanity into just more shit on a stick.
I want to pass on the first few paragraphs from Chaffin and Elinson's piece in the May 10 Wall Street Journal, which give a juicy summary of warring camps in MAGA world:
When President Trump announced last month that he would upend decades of American trade policy by imposing massive tariffs even on longtime allies, he aroused the competing spirits of his closest advisers. Elon Musk, the world’s richest man, was all too aware of the disruption tariffs would pose to his electric vehicle company, Tesla, with factories and suppliers around the world. He blasted Trump’s trade adviser, Peter Navarro, as “a moron” and “dumber than a sack of bricks.”
Vice President JD Vance, on the other hand, is an ardent defender of a trade policy that Trump insists will restore industrial jobs to the Rust Belt, including Vance’s home state of Ohio. “What has the globalist economy gotten the United States of America?” he asked on Fox News last month.
“We borrow money from Chinese peasants to buy the things those Chinese peasants manufacture. That is not a recipe for economic prosperity.”
Within that clash were strains of two radical and conflicting philosophies that have animated Trump’s first 100 days. On one side are tech bros racing to create a new future; on the other, a resurgent band of conservative Catholics who yearn for an imagined past. Both groups agree that the status quo has failed America and must be torn down to make way for a new “postliberal” world. This conviction explains much of the revolutionary fervor of Trump’s second term, especially the aggressive bludgeoning of elite universities and the federal workforce.
But the two camps disagree sharply on why liberalism should be junked and what should replace it. The techies envision a libertarian world in which great men like Musk can build a utopian future unfettered by government bureaucrats and regulation. Their dark prince is Curtis Yarvin, a blogger-philosopher who has called for American democracy to be replaced by a king who would run the nation like a tech CEO.
The conservative Catholics, in contrast, want to return America to a bygone era. They venerate local communities, small producers and those who work with their hands. This “common good” conservatism, as they call it, is bound together by tradition and religious morality. Unlike Musk, with his many baby mamas and his zeal to colonize Mars, they believe in limits and personal restraint.
Interesting open source article describing experiments by Leshin et al. (Our current MAGA overlords would like to ban such perspectives.) Their abstract:
Adults tend to view men (more so than women) as default people, with numerous real-world consequences for gender equity. In the United States, the tendency to center men in concepts of people develops across middle childhood, yet the specific mechanisms that contribute to it remain unknown. Here, we investigate one subtle but potentially powerful social mechanism: the category labels that parents use to describe boys/men and girls/women in conversations with their children. Across two studies (N = 822 parent–child dyads, predominantly from the United States), parents used gender-neutral labels like “kid” or “person” more often to describe boys/men than girls/women and, conversely, used gender-specific labels (e.g., “girl”) more often to describe girls/women than boys/men. These patterns emerged when parents were shown gender-stereotypical girls/women and boys/men (e.g., a girl painting her nails, a boy digging for worms); when parents viewed counterstereotypical stimuli (e.g., a boy painting his nails, a girl digging for worms), the patterns reversed. Our findings illuminate parents’ category label usage as a critical social mechanism that may undergird the development of a male default in a US cultural context, informing efforts to intervene on this process.
Here is the abstract and part of the introduction (giving examples of system breakdowns) of a fascinating and approachable analysis by Moran et al. Motivated readers can obtain a PDF of the article from me.
Go to dogegov.com and click on articles in the drop down menu to learn about our new masters.
I hesitate to add to the ongoing flow of negativity in the infosphere, but I will pass on my edited ChatGPT 4o summaries of articles by Venkatesh Rao and Timothy Snyder. I think they describe our situation in appropriately dire terms.
Rao argues that contemporary U.S. politics has shifted from a “culture war” to a “civilizational war,” and suggests that Trump and Musk’s faction has undergone this shift by becoming more disciplined and warlike, while Democrats remain stuck in an outdated, ineffective culture-war mode. Unlike culture wars, which are low-intensity conflicts centered on optics and social skirmishes, civilizational wars resemble historical steppe-nomad conquests—high-tempo, ruthless, and strategically destructive. The piece draws parallels to the 30 Years’ War and Mongol tactics, suggesting that modern “warriors” operate in a decentralized, open-source insurgency mode, using social media as a kind of continuous intoxication (akin to fermented mare’s milk for nomads) to stay engaged and aggressive. The author critiques mainstream political analysis for misunderstanding this shift, misinterpreting legal checks and media discourse as signs of normalcy rather than symptoms of deeper conflict. Ultimately, they suggest this is a negative-sum war that cannot be stopped, only mitigated.
Snyder describes the U.S. ias undergoing an oligarchic coup aimed at dismantling democracy and concentrating power among a wealthy elite. It asserts that the current executive branch rejects the idea of America as a nation governed by its people and instead seeks to create disorder to strengthen its control. The systematic discrediting of government institutions, demonization of federal workers, and elevation of billionaires as heroes have paved the way for this takeover. The destruction is intentional, with no plan to govern—only to create chaos that justifies authoritarian rule. The author likens Trump’s tariffs, attacks on allies, and deportation spectacles to deliberate strategies designed to impoverish, divide, and weaken Americans while enriching a select few. The removal of experienced officials in law enforcement and intelligence, under the guise of ideological purges, aims to eliminate those who could resist lawlessness. The article warns that unless citizens act decisively—through legal challenges, state-level resistance, impeachment efforts, and corporate pushback—the country will descend into an anti-democratic system where oligarchs manipulate markets and power unchecked. The call to action is urgent: people must organize, resist demoralization, and recognize that self-governance requires collective action beyond just voting.
I want to pass on this link to Manvir Singh's article in The New Yorker which has the title of this post. I enjoyed reading the article.
Also, I've been comparing the responses of LLMs (Chat GPT 4o, Perplexity, Anthropic, Gemini) to the very simple prompt: "Please summarize the following essay with the title: ......". I'm frequently finding the Perplexity response most useful. It provides a list of steps it is following and HTML links to the sources it is using. I estimate it would take me at least 30 min to prepare a summary of similar quality to the following Perplexity summary of the Singh article:
The essay explores the profound question of how language shapes human thought, examining cultural, cognitive, and social dimensions of this relationship.
Key Themes and Arguments:
1. Language as Empowerment: The essay opens with the story of Angrezi Devi, the “Goddess English,” created by Dalit writer Chandra Bhan Prasad in India to symbolize the liberating potential of English for marginalized communities. Prasad argues that English offers Dalits a pathway out of caste oppression, contrasting it with Hindi, which he sees as reinforcing caste biases.
2. Global Dominance of English: English is highlighted as the world’s dominant lingua franca, used in international institutions and pursued globally for its economic and social opportunities. However, its spread is accompanied by concerns about cultural homogenization and “cognitive hegemony,” where English’s structures might shape thought in ways aligned with Western norms.
3. Linguistic Relativity and Whorfianism: The essay revisits Benjamin Lee Whorf’s theory that language influences thought, contrasting “strong Whorfianism” (language determines thought) with “weak Whorfianism” (language subtly shapes cognition). Examples include studies showing how Russian speakers distinguish shades of blue more effectively than English speakers due to their language’s lexical distinctions.
4. Cultural Variations in Cognition: Research reveals striking differences in how languages shape perceptions of time, space, and sensory experiences. For instance: • Aymara speakers conceptualize time with the past “in front” and the future “behind,” reflecting their linguistic metaphors. • Jahai speakers possess a rich vocabulary for smells, unlike English speakers who struggle to describe odors abstractly.
5. Critiques and Nuances: While critics like Steven Pinker dismiss extreme Whorfian claims as overblown, recent research supports nuanced effects of language on cognition. Language influences attention, memory, and problem-solving in culturally specific ways but does not wholly dictate thought.
6. Language Evolution and Adaptation: The essay emphasizes that languages evolve to reflect local contexts. For example, Indian English integrates native linguistic features like honorifics and unique phrases, showing how English adapts to diverse cultural environments rather than imposing uniformity.
Conclusion:
The essay concludes that while language profoundly influences thought, this relationship is dynamic and bidirectional. Languages shape how we perceive the world, but human cognition also transforms languages over time. Linguistic diversity is celebrated as a testament to humanity’s ingenuity and adaptability.
Garside et al. make observations suggesting that cognitive mechanisms such as language are required for the expression of consensus color categories.
I pass on the abstract of an interesting manuscript that has been accepted by Behavioral and Brain Science, by Wahring et al. :
Women are often viewed as more romantic than men, and romantic relationships are assumed to be more central to the lives of women than to those of men. Despite the prevalence of these beliefs, some recent research paints a different picture. Using principles and insights based on the interdisciplinary literature on mixed-gender relationships, we advance a set of four propositions relevant to differences between men and women and their romantic relationships. We propose that relative to women: (a) men expect to obtain greater benefits from relationship formation and thus strive more strongly for a romantic partner, (b) men benefit more from romantic relationship involvement in terms of their mental and physical health, (c) men are less likely to initiate breakups, and (d) men suffer more from relationship dissolution. We offer theoretical explanations based on differences between men and women in the availability of social networks that provide intimacy and emotional support. We discuss implications for friendships in general and friendships between men and women in particular.
When I scroll through some of the social medial sites (Instagram, Tik-Tok, X, YouTube, etc.) used by today's teenagers and their influencers I feel I am visiting another planet whose denizens have brains that process reality in an entirely different way from my 82 year old model. They have much shorter attention spans that remain focused on one context for only a few seconds before flitting on. What kind of culture does this peer group inhabit? This issue is addressed in a manuscript by Levy and Amir accepted by Behavioral and Brain Science that one can download and read through. Here is their abstract:
The human capacity for culture is a key determinant of our success as a species. While much work has examined adults’ abilities to create and transmit cultural knowledge, relatively less work has focused on the role of children (approx. 3-17 years) in this important process. In the cases where children are acknowledged, they are largely portrayed as acquirers of cultural knowledge from adults, rather than cultural producers in their own right. In this paper, we bring attention to the important role that children play in cultural adaptation by highlighting the structure, function, and ubiquity of the large body of knowledge produced and transmitted by children, known as peer culture. Supported by evidence from diverse disciplines, we argue that children are independent producers and maintainers of these autonomous cultures, which exist with regularity across diverse societies, and persist despite compounding threats. Critically, we argue peer cultures are a source of community knowledge diversity, encompassing both material and immaterial knowledge related to geography, ecology, subsistence, norms, and language. Through a number of case studies, we further argue that peer culture products and associated practices — including exploration, learning, and the retention of abandoned adult cultural traits — may help populations adapt to changing ecological and social conditions, contribute to community resilience, and even produce new cultural communities. We end by highlighting the pressing need for research to more carefully investigate children's roles as active agents in cultural adaptation.
The latest PNAS issue has a fascinating open source section on quantitative studies of recent cultural evolution.
A fascinating model for collective behavior from Heins et al.:
I pass on this link from the latest Human Bridges newsletter, and would encourage readers to subscribe to and support the Observatory's Human Bridges project, which is part of the Independent Media Institute:
Recent scientific findings and research in the study of human origins and our biology, paleoanthropology, and primate research have reached a key threshold: we are increasingly able to trace the outlines and fill in the blanks of our evolutionary story that began 7 million years ago to the present, and understand the social and cultural processes that produced the world we live in now.
I want to pass on two articles I've poured over several times, that describe the increasing "complexification" or "enjunkification" of our online lives. The first is "The Year Millennials Aged Out of the Internet" by Millenial writer Max Reed. Here are some clips from the article.
Something is changing about the internet, and I am not the only person to have noticed. Everywhere I turned online this year, someone was mourning: Amazon is “making itself worse” (as New York magazine moaned); Google Search is a “bloated and overmonetized” tragedy (as The Atlantic lamented); “social media is doomed to die,” (as the tech news website The Verge proclaimed); even TikTok is becoming enjunkified (to bowdlerize an inventive coinage of the sci-fi writer Cory Doctorow, republished in Wired). But the main complaint I have heard was put best, and most bluntly, in The New Yorker: “The Internet Isn’t Fun Anymore.”
The heaviest users and most engaged American audience on the internet are no longer millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it’s not our internet anymore. It belongs to zoomers now...zoomers, and the adolescents in Generation Alpha nipping at their generational heels, still seem to be having plenty of fun online. Even if I find it all inscrutable and a bit irritating, the creative expression and exuberant sociality that made the internet so fun to me a decade ago are booming among 20-somethings on TikTok, Instagram, Discord, Twitch and even X.
...even if you’re jealous of zoomers and their Discord chats and TikTok memes, consider that the combined inevitability of enjunkification and cognitive decline means that their internet will die, too, and Generation Alpha will have its own era of inscrutable memes and alienating influencers. And then the zoomers can join millennials in feeling what boomers have felt for decades: annoyed and uncomfortable at the computer.
The second article I mention is Jon Caramanica's: "Have We Reached the End of TikTok’s Infinite Scroll?" Again, a few clips:
The app once offered seemingly endless chances to be charmed by music, dances, personalities and products. But in only a few short years, its promise of kismet is evaporating...increasingly in recent months, scrolling the feed has come to resemble fumbling in the junk drawer: navigating a collection of abandoned desires, who-put-that-here fluff and things that take up awkward space in a way that blocks access to what you’re actually looking for.
This has happened before, of course — the moment when Twitter turned from good-faith salon to sinister outrage derby, or when Instagram, and its army of influencers, learned to homogenize joy and beauty...the malaise that has begun to suffuse TikTok feels systemic, market-driven and also potentially existential, suggesting the end of a flourishing era and the precipice of a wasteland period.
It’s an unfortunate result of the confluence of a few crucial factors. Most glaring is the arrival of TikTok’s shopping platform, which has turned even small creators into spokespeople and the for-you page of recommendations into an unruly bazaar...The effect of seeing all of these quasi-ads — QVC in your pocket — is soul-deadening...The speed and volume of the shift has been startling. Over time, Instagram became glutted with sponsored content and buy links, but its shopping interface never derailed the overall experience of the app. TikTok Shop has done that in just a few months, spoiling a tremendous amount of good will in the process.
I want to pass on this link to an NYTimes Opinion Guest essay by Aneesh Raman, a work force expert at LinkedIn, and Maria Flynn, president of Jobs for the Future, For readers blocked by the NYTimes paywall, I paste in the conclusion of the article:
Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”
The knowledge economy that we have lived in for decades emerged out of a goods economy that we lived in for millenniums, fueled by agriculture and manufacturing. Today the knowledge economy is giving way to a relationship economy, in which people skills and social abilities are going to become even more core to success than ever before. That possibility is not just cause for new thinking when it comes to work force training. It is also cause for greater imagination when it comes to what is possible for us as humans not simply as individuals and organizations but as a species.
I would like to point MindBlog readers to an article by Deborah Barsky with the title of this post. The following clip provides relevant links to the Human Bridges project of the Independent Media Institute.
Deborah Barsky is a writing fellow for the Human Bridges project of the Independent Media Institute, a researcher at the Catalan Institute of Human Paleoecology and Social Evolution, and an associate professor at the Rovira i Virgili University in Tarragona, Spain, with the Open University of Catalonia (UOC). She is the author of Human Prehistory: Exploring the Past to Understand the Future (Cambridge University Press, 2022).
Interesting work from Putkinen et al. (open source):
Significance
Music is inherently linked with the body. Here, we investigated how music's emotional and structural aspects influence bodily sensations and whether these sensations are consistent across cultures. Bodily sensations evoked by music varied depending on its emotional qualities, and the music-induced bodily sensations and emotions were consistent across the tested cultures. Musical features also influenced the emotional experiences and bodily sensations consistently across cultures. These findings show that bodily feelings contribute to the elicitation and differentiation of music-induced emotions and suggest similar embodiment of music-induced emotions in geographically distant cultures. Music-induced emotions may transcend cultural boundaries due to cross-culturally shared links between musical features, bodily sensations, and emotions.Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035). We precented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.