Monday, December 29, 2025

What our brains are doing as we experience musical pleasure.

I've been playing more Schubert lately, drawn by his genius for inserting melodic sections within his piano works (sonatas, impromptus, fantasies, etc.) that give me sheer pleasure when I play them. (When I am beginning to wake in the morning, the passages play in my head and I can visualize both my fingers on the keys and the musical score. As I continue to wake, this all slips away.) 

These experiences made me perk up when I saw the article by Zatorre and collaborators in the Jan. 2026 issue of the Journal of Cognitive Neuroscience. Here is their abstract (motivated readers can obtain a PDF of the article, which has some nice graphics, from me):

The enjoyment of music involves a complex interplay between brain perceptual areas and the reward network. While previous studies have shown that musical liking is related to an enhancement of synchronization between the right temporal and frontal brain regions via theta frequency band oscillations, the underlying mechanisms of this interaction remain elusive. Specifically, a causal relationship between theta oscillations and musical pleasure has yet to be shown. In the present study, we address this question by using transcranial alternating current stimulation (tACS). Twenty-four participants underwent three different sessions where they received tACS over the right auditory cortex before listening to and rating a set of melodies selected to vary in familiarity and complexity. In the target session, participants received theta stimulation, while in the other two sessions, they received beta and sham stimulation, serving as controls. We recorded brain activity using EEG during task performance to confirm the effects of tACS on oscillatory activity. Results revealed that compared with sham, theta, but not beta, stimulation resulted in higher liking ratings specifically for unfamiliar music with low complexity. In addition, we found increased theta connectivity between the right temporal and frontal electrodes for these stimuli when they were most liked after theta stimulation but not after beta stimulation. These findings support a causal and frequency-specific relationship between music hedonic judgments and theta oscillatory mechanisms that synchronize the right temporal and frontal areas. These mechanisms play a crucial role in different cognitive processes supported by frontotemporal loops, such as auditory working memory and predictive processing, which are fundamental to music reward processing.

 

Friday, December 26, 2025

What are humans for? - AI Forces a Question We’ve Been Dodging

AI Is Already Fusing With Us — The Only Question Is How

The argument about whether AI will enhance or diminish our humanity usually starts in the wrong place. People argue about what AI should do before asking a more basic and uncomfortable question: what are humans for?

Every powerful technology embeds an answer to that question. AI is no exception. In fact, it may be the most consequential case yet, because AI is not just a tool we use—it is a system that increasingly co-thinks with us, shapes our attention, edits our language, nudges our choices, and quietly rewires our sense of agency.

We are not facing a future of humans versus machines. We are already living in a world of human–machine fusion.

The real issue is not whether this fusion will happen. It is what kind of fusion it will be.

There Is No Such Thing as “Human Values”

Much of the public discussion of AI ethics rests on a fantasy: that “human values” exist as a coherent, global, agreed-upon set of principles that AI can be aligned with.

They don’t.

Value, purpose, and meaning are social constructions. They vary radically across cultures, religions, and political systems—and they always have. What one society calls a meaningful life, another calls wasted. What one treats as sacred, another treats as irrelevant or dangerous.

There is no global agreement on what humans are for. Expecting a worldwide technology like AI to be guided by a single, shared vision of human purpose is naïve at best, dishonest at worst.

There Will Be Many AIs, Just as There Are Many Religions

Because there is no single answer to what humans are for, there will be no single AI.

There will be many forms of AI, each reflecting the values—explicit or implicit—of the cultures, institutions, and power structures that create them. Some will be optimized for surveillance, compliance, and efficiency. Others will be built to extend memory, imagination, and self-understanding.

In blunt terms: some AIs will enslave us; others could help liberate us.

The enslaving versions will treat humans as components to be managed—predictable, optimizable, correctable. Friction will be treated as inefficiency. Deviance as error. Interior life as noise.

The liberating versions will function as prostheses for the mind—tools that expand rather than replace human capacities. They will support reflection rather than manipulation, agency rather than control, curiosity rather than conformity.

The difference is not technical. It is philosophical.

AI as Prosthesis vs. AI as Manager

We already accept prostheses for the body—glasses, hearing aids, pacemakers—because they restore or extend human function without claiming authority over the person. AI can play a similar role for cognition: augmenting memory, pattern recognition, synthesis, and imagination.

But there is a sharp boundary here.

A cognitive prosthesis supports a human agent.
A cognitive manager displaces one.

The danger is not that AI becomes intelligent. The danger is that it becomes normative—quietly deciding what matters, what counts, what is worth attention, and what is not.

Once that happens, humans do not disappear. They persist as interfaces.

The Question We Can’t Outsource

AI forces us to confront a question we have long evaded: what kind of humans do we want to be when much of our thinking is no longer done alone?

There will be no final answer. There never has been. Meaning has always been plural, unstable, and contested.

But refusing to ask the question does not keep us neutral. It simply allows markets, states, and optimization systems to answer it for us—implicitly, automatically, and without appeal.

AI will fuse with us. That is already happening.

The only choice left is whether we fuse as authors of our lives—or as managed systems optimized for goals we did not choose.

***********************************

The above text is ChatGPT 5.2's response to a prompt asking it to develop the ideas in a long paragraph I wrote into a potential MindBlog post, followed by a second prompt asking it to turn that first, excellent response into a shorter and more polemical version.

 

Wednesday, December 24, 2025

The Modernity Machine

I want to pass on to readers this Venkatesh Rao substack essay that summarizes what his book club has learned in the past year: 

The Modernity Machine III

Completion, Saturation, and Phase Transition

Monday, December 22, 2025

Human Machines and AI Machines

The story of life is the story of machines that learned how to copy themselves and persist through time. Over billions of years these replicating systems evolved into ever more complex and effective forms—eventually producing organisms capable of cooperation, culture, ritual, and symbolic thought, all of which further enhanced their survival.

Humans are one such lineage of machines. What distinguishes us is not that we use tools, but that we routinely incorporate them. From sharpened flakes of flint to written language, from clocks to calculators, from books to neural imaging—and now to large language models—we have repeatedly fused our cognitive lives with our technologies. Each major tool has altered not only what we can do, but how we think, remember, plan, and imagine. Our tools become extensions of our minds, and over time, of ourselves.

Current anxieties about “surrendering our humanity” to AI can be seen for what they are: carbon-based machines deploying one of their evolved emotional heuristics—fear—to frame a speculative relationship with silicon-based machines. The fear has two familiar faces: fusion (“we will lose ourselves”) or extinction (“we will be replaced”). Both are narratives, not outcomes, and both echo earlier moments of technological anxiety that now seem quaint.

Nothing in our history suggests a sharp boundary between “human” and “machine.” We have always been hybrid systems—biological substrates continually reconfigured by external cognitive prostheses. AI does not introduce this pattern; it intensifies it. The real question is not whether we will remain human, but how this next layer of machinery will reshape what that word comes to mean.

*****

The text above is ChatGPT 5.2's response to my request that it improve a short and terse paragraph I wrote. The prompt:

Given what you know about my writings from my previous prompts to you and my blog at mindblog.dericbownds.net , please edit and improve this draft of a possible Mindblog post:

The story of life has been one of replicating machines enduring through time by evolving into ever more effective and complicated forms, some of which have formed social groups having cultural rituals that enhance their survival. From sharpened chips of flint stone to AI humans have always altered their minds and cultures by fusing with their technology, making their current tools an extension of themselves. Concern over "surrendering our humanity" to AI is  carbon-based machines using one of their invented emotions called "fear" to characterize possible fusion with, or possible extinction by, silicon-based machines. 

Do you prefer the long or the short version? 

Thursday, December 18, 2025

Time, space, memory and brain–body rhythms

Buzsáki at NYU Medical School offers interesting perspectives on the two 'types' of time considered in cognitive science and neuroscience: physical time (also known as absolute time; chronos) and subjectively experienced time (kairos; the opportune moment). Here is his abstract (motivated readers can obtain a PDF of the article from me):

Time and space are crucial concepts in neuroscience, because our personal memories are tied to specific events that occur ‘in’ a particular space and on a ‘timeline’. Thus, we seek to understand how the brain constructs time and space and how these are related to episodic memory. Place cells and time cells have been identified in the brain and have been proposed to ‘represent’ space and time via single-neuron or population coding, thus acting as hypothetical coordinates within a Newtonian framework of space and time. However, there is a fundamental tension between the linear and unidirectional flow of physical time and the variable nature of experienced time. Moreover, modern physics no longer views space as a fixed container and time as something in which events occur. Here, I articulate an alternative view: that time (physical and experienced) is an abstracted relational measure of change. Physical time is measured using arbitrary units and artificial clocks, whereas experienced time is linked to a hierarchy of brain–body rhythms that provide a range of reference scales that reflect the full span of experienced time. Changes in body and brain circuits, tied to these rhythms, may be the source of our subjective feeling of time. 

Tuesday, December 16, 2025

Changes in human brain organization at age ~83 (as well as ages ~9, 32, and 66)

This open source paper by Mousley et al. caught my eye because, at 83 years of age, I am at one of the article's 'turning points' in brain organization. I would urge MindBlog readers to have a scan through its descriptions of infancy to ~9 years old, adolescence (~9-32), adulthood (~32-66), early aging (~66-83), and late aging (~83 onward). You don't have to understand the jargon and analytical techniques employed to get the gist of the findings.

I gave ChatGPT 5.2 the following prompt:

One of the major topological turning points across the lifespan is around 83 years of age. Based on the contents of this paper, and acting as if you are one of its authors, please explain to an 83 year old (like myself) in what ways a brain older than 83 years will be processing information differently than younger brains.  

Here is the response. Its descriptions conform to my subjective experience, which I would describe in very similar terms:

How an 83-year-old brain processes information differently

By the early 80s, the brain is not simply a weaker version of a younger brain. It is operating in a different organizational regime. The turning point we identify around this age reflects a shift in how information is routed, prioritized, and stabilized across brain networks.

1. Information flow becomes more selective, not uniformly slower

In younger adulthood, the brain relies heavily on fast, globally integrated communication: information moves quickly across many regions using multiple alternative routes. By the early 80s, the network has shed many weaker, redundant connections. What remains is a smaller set of stronger, more reliable pathways.

Consequence:

  • Less “broadcasting” of information across the whole brain

  • More reliance on well-established routes

  • Processing may feel slower, but it is also less noisy and more deliberate

This is pruning with consequences: flexibility is reduced, but stability increases.


2. Stronger dependence on core hubs and rich-club pathways

At this age, communication is increasingly routed through a central backbone of highly connected hub regions (the “rich club”). Peripheral regions interact less directly and more through these hubs.

Consequence:

  • Thinking depends more on central coordination than on parallel exploration

  • Multitasking and rapid task-switching become harder

  • Sustained, focused cognition remains comparatively preserved

If a hub is disrupted (fatigue, illness, stress), performance drops more noticeably than in younger brains.


3. Increased modularity: subsystems operate more independently

Earlier in life, brain modules (vision, language, memory, executive control) are highly integrated. After the early 80s, modules become more segregated again.

Consequence:

  • Cross-domain integration (e.g., combining new sensory input with abstract reasoning) requires more effort

  • Familiar, well-practiced tasks within a domain are relatively spared

  • Novel tasks that require coordination across systems feel disproportionately taxing

This explains why learning entirely new frameworks can be difficult, while deep expertise remains intact.


4. Reduced exploratory processing, increased reliance on prior models

From a network perspective, the older brain favors prediction over exploration. It relies heavily on established internal models and expectations rather than rapidly updating from incoming data.

Consequence:

  • Pattern recognition based on long experience remains strong

  • Ambiguous or rapidly changing situations are harder to parse

  • “Sense-making” may be excellent, while raw speed and novelty handling decline

This is not cognitive rigidity—it is a rational adaptation to reduced network redundancy.


5. Global efficiency decreases, but error suppression improves

Global efficiency (shortest paths between all regions) declines after the early 80s. However, this is accompanied by fewer competing pathways and less internal interference.

Consequence:

  • Fewer “half-formed” thoughts intrude

  • Mental clarity may improve for tasks that benefit from constraint

  • Creativity shifts from combinatorial novelty to synthesis and refinement

In other words: fewer ideas per unit time, but a higher signal-to-noise ratio.


6. The brain becomes less resilient to overload, not less intelligent

The turning point we observe does not indicate a sudden drop in intelligence. It reflects a reduced capacity to absorb simultaneous demands.

Consequence:

  • Performance is highly context-dependent

  • Quiet, low-interference environments matter more

  • Fatigue has a larger cognitive cost

When conditions are right, reasoning quality can match or exceed that of younger adults.


Summary in plain terms

After about age 83, the brain:

  • Communicates through fewer, stronger pathways

  • Relies more on core hubs and less on diffuse integration

  • Prioritizes stability, prediction, and experience over speed and flexibility

  • Performs best when tasks are focused, familiar, and well-paced

This is not “decline” in the simple sense. It is a phase change—from a brain optimized for growth, exploration, and rapid integration to one optimized for coherence, efficiency under constraint, and meaning-making based on a lifetime of stored structure.

From a network standpoint, the older brain is no longer trying to be maximally flexible. It is trying to be maximally reliable.
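For readers who want a more concrete sense of two of the network measures invoked above (global efficiency and modularity), here is a minimal illustrative sketch in Python using the networkx library. The two synthetic graphs and all parameter values are my own inventions for illustration; they are not derived from the Mousley et al. data or analyses.

```python
# Toy illustration of "global efficiency" (average inverse shortest-path length
# across all node pairs) and "modularity" (how cleanly a network separates into
# communities). The graphs below are synthetic stand-ins, not brain data.
import networkx as nx
from networkx.algorithms import community

def describe(name, G):
    efficiency = nx.global_efficiency(G)                   # higher = faster global routing
    modules = community.greedy_modularity_communities(G)   # detected communities
    modularity = community.modularity(G, modules)          # higher = more segregated modules
    print(f"{name}: global efficiency = {efficiency:.3f}, "
          f"modularity = {modularity:.3f}, modules = {len(modules)}")

# A densely rewired small-world graph: many alternative routes ("integrated").
integrated = nx.watts_strogatz_graph(n=120, k=10, p=0.30, seed=1)
# A sparser, weakly rewired graph: fewer shortcuts, more clustering ("segregated").
segregated = nx.watts_strogatz_graph(n=120, k=6, p=0.05, seed=1)

describe("integrated-style graph", integrated)
describe("segregated-style graph", segregated)
```

In a run like this, the densely rewired graph typically scores higher on global efficiency, while the sparser, more clustered graph scores higher on modularity, which is the general trade-off the response above describes.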

 

Friday, December 12, 2025

Bigger AI models are failing faster - The future evolution of AI is into the 4D world

I want to pass on this interesting Venkatesh Rao piece (one of what he calls "Sloptraptions" because they are written with AI assistance):

The current debate over whether OpenAI has become “too big to fail,” triggered by the viral Wall Street Journal article, tends to frame the risk in familiar economic terms: over-concentration, interlocking commitments, trillion-dollar infrastructure buildouts, and the emergence of a firm whose collapse could destabilize a sector that now props up a sluggish U.S. economy. That argument is correct but incomplete. The deeper structural fragility lies not in the financing of AI infrastructure but in the epistemic dynamics of the models themselves. As we worked through the numbers, it became clear that OpenAI’s infrastructure roadmap—petawatts of compute, trillion-parameter systems, multi-trillion-dollar capital requirements spread across cloud providers, chip manufacturers, and sovereign backers—was constructed on an essentially theological belief in seamless exponential model improvement, a belief that assumed scaling could continue indefinitely toward “AGI.” That faith was not grounded in empirical availability of training data or in any theoretical understanding of how learning actually behaves at frontier scale. The infrastructure has been sized for stars that burn hotter and hotter, without regard for the fuel supply. 

The real fuel, of course, is training data: the cultural, linguistic, computational, and behavioral traces that models attempt to fit. And here the numbers are uncompromising. The growth of high-quality data is slow and diminishing. The world’s stock of usable text, code, imagery, and speech grows incrementally, not exponentially. Meanwhile model sizes, compute budgets, and context windows have expanded by orders of magnitude. That mismatch means that newer, larger models are trained on datasets that are only marginally larger than those that fed their predecessors. The result is not graceful scaling but increasing epistemic brittleness. These larger systems learn the training distribution with greater and greater precision, pushing well past the semantic “signal” of an era and into its high-frequency cultural noise. They fit not only the stable structures of human knowledge but its accidents, its transient biases, its stylistic detritus. Emmett Shear’s observation—that frontier models are barely regularized and therefore massively overfit—captures this dynamic in accessible language.  

But the deeper point is that overfitting to a static cultural snapshot becomes more catastrophic the larger the model grows. Culture is non-stationary; code ecosystems evolve; APIs change; institutions churn; slang mutates; the factual substrate of the world drifts each month. A small model trained on yesterday’s world degrades slowly. A large model trained on yesterday’s world degrades quickly and fails sharply. 

This leads to a paradox at the heart of current AI economics. The trillion-dollar infrastructure wave justified by OpenAI’s ambitions has been built to support the next generation of massive models, but those massive models become obsolete faster than smaller ones. Like large stars, they burn brighter but collapse sooner. They present answers with greater surface coherence and tighter epistemic compression, giving users the illusion of deeper insight when they are actually reproducing the micro-structure of an outdated distribution. People will rely on this increased apparent precision—mistaking fluency for truth—and take correspondingly larger risks, operational, financial, political, and scientific. Precision becomes a kind of leverage: as confidence grows faster than correctness, the system tilts toward a bubble of over-trusted, under-verified automated reasoning. When the model slips outside of its training-era manifold, it does so abruptly, invisibly, and in ways that propagate errors with unprecedented speed across the organizations that depend on it. This is a new kind of systemic fragility: epistemic over-leverage driven by model scale rather than financial leverage driven by debt. 

Against this background, the “too big to fail” scenario acquires a different meaning. The infrastructure ecosystem—Oracle’s data centers, Microsoft’s GPU clusters, Broadcom’s networking pipelines, Nvidia’s supply chain—was scaled for frontier models that may offer shrinking marginal returns and increasing temporal instability. If model quality plateaus or degrades because data does not keep pace, the economic justification for the infrastructure may collapse even as the infrastructure itself remains technically capable and commercially underutilized. The danger is not that OpenAI fails outright, but that the sector pivots into a phase where the largest models have the shortest useful lifespans, while the capital commitments they require stretch across decades. This is a structural misalignment between epistemic time and financial time. 

Yet the story need not end in collapse. There is a way out, and it comes from expanding the data manifold itself rather than merely scaling the model against a static corpus. The next major frontier is likely not text or code but 4D video—continuous, high-bandwidth, spatiotemporal sensory data that more closely matches the real structure of the physical world. Unlike textual culture, which is finite and saturating, the spatiotemporal world generates unbounded data streams. High-fidelity 4D capture, simulation, and reconstruction offer an escape from the bottleneck that is slowly strangling language-model scaling. Models trained on rich physical dynamics rather than frozen cultural snapshots would not merely grow larger; they would grow deeper, anchored to a data distribution that evolves with reality instead of drifting away from it. If the industry moves decisively toward 4D multimodal modeling—robotics, embodied agents, physical reasoning, simulation feedback loops—then the present overfitting trap can be broken. The fuel supply becomes effectively renewable, and the models’ lifespans lengthen rather than shrink. In that sense, the most optimistic path is not to keep scaling cultural predictors but to graduate beyond them, giving the infrastructure something real to learn from and restoring coherence between model scale, data scale, and the world itself. 
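As a toy numerical analogy for the overfitting-plus-drift argument above (my own aside, not part of Rao's essay, and not a claim about how actual frontier models behave), one can fit a small and a large polynomial model to the same noisy snapshot of a signal and then score both against a slightly drifted version of that signal:

```python
# Toy analogy only: a "small" and a "large" polynomial model are fit to the same
# noisy snapshot of a signal, then evaluated after the underlying signal drifts.
# All functions, sizes, and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)                      # a fixed, limited "training corpus"
signal_then = np.sin(2 * np.pi * x)                # the world at training time
snapshot = signal_then + rng.normal(0.0, 0.25, x.size)

small = np.polynomial.Polynomial.fit(x, snapshot, deg=5)
large = np.polynomial.Polynomial.fit(x, snapshot, deg=20)

signal_now = np.sin(2 * np.pi * (x + 0.05))        # the world has drifted a little

def rmse(model, target):
    return float(np.sqrt(np.mean((model(x) - target) ** 2)))

for name, model in (("small model (degree 5)", small),
                    ("large model (degree 20)", large)):
    print(f"{name}: fit to snapshot = {rmse(model, snapshot):.2f}, "
          f"error on drifted signal = {rmse(model, signal_now):.2f}")
```

In runs like this the larger model typically hugs the snapshot more tightly, noise and all, and then gives up more accuracy once the signal drifts, which is the pattern the essay worries about at vastly greater scale.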

 

 

 

Wednesday, December 10, 2025

Focused attention mindfulness meditation changes our cerebrospinal fluid dynamics

From Keating et al.  

Significance

Mindfulness meditation is widely recognized for its self-reported mental and physical health benefits, yet its effects on core physiological systems that support brain health remain incompletely understood. This study provides evidence that a focused attention (FA) style of mindfulness meditation can modulate cerebrospinal fluid (CSF) dynamics in humans. Using neuroimaging, we demonstrate that FA meditation reduces regurgitant CSF flow through the aqueduct, directionally opposite to patterns seen in aging and neurodegeneration; additionally, meditation-induced CSF changes near the skull base are similar to those reported during sleep, an enhancer of neurofluid circulation. Findings suggest that mindfulness may offer a nonpharmacological, waking model for augmenting neurofluid circulation and provide a potential physiological mechanism linking meditation to brain health.

Abstract

Neurofluids, including cerebrospinal fluid (CSF) and interstitial fluid, circulate through regulated central nervous system pathways to clear cerebral waste and support brain health, with elevated CSF flow hyperdynamicity and regurgitation through the cerebral aqueduct associating with aging and neurodegeneration. Sleep exerts state-dependent effects on neurofluid circulation, yet similar modulation during unique waking states, such as meditation, remains underexplored. Notably, mindfulness meditation shares several regulatory features with sleep, with core meditation practices representing distinct arousal states. We investigated whether the focused attention (FA) style of mindfulness meditation modulates neurofluid dynamics directionally opposite to aging and consistent with sleep. Using phase-contrast MRI, we assessed absolute CSF flow and velocity through the aqueduct, and using blood oxygenation level–dependent (BOLD) MRI, we assessed CSF fluctuations near the cervicomedullary junction together with total supratentorial gray matter fluctuations. Assessments were repeated in meditation-naïve adults during mind wandering (MW) without (n = 13; repeatability controls) and with (n = 14; breath controls) respiration rate modulation and in adept meditators (n = 23) during MW and FA meditation. No aqueduct CSF flow changes were observed in control groups. In meditators, aqueduct absolute CSF flow motion decreased from MW to FA meditation (4.60 ± 2.27 mL/min to 4.17 ± 2.10 mL/min, P = 0.005) owing to reduced regurgitant cranially directed CSF flow velocity. On BOLD, this paralleled increased low-frequency (0.0614 to 0.0887 Hz) CSF fluctuations (P = 0.0138), which were inversely correlated with gray matter fluctuations during FA meditation. Findings suggest that mindfulness meditation may represent a nonpharmacological, waking state capable of modulating neurofluid dynamics in a directionally similar manner to sleep and opposite to aging and neurodegeneration.

Tuesday, December 09, 2025

“Core Perception”: Re-imagining Precocious Reasoning as Sophisticated Perceiving

I'm enjoying reading through an interesting article by Bai et al. that will appear in Behavioral and Brain Sciences and is now soliciting commentary. Here is the abstract (motivated readers can obtain a copy of the manuscript PDF from me):

“Core knowledge” refers to a set of cognitive systems that underwrite early representations of the physical and social world, appear universally across cultures, and likely result from our genetic endowment. Although this framework is canonically considered as a hypothesis about early-emerging conception — how we think and reason about the world — here we present an alternative view: that many such representations are inherently perceptual in nature. This “core perception” view explains an intriguing (and otherwise mysterious) aspect of core-knowledge processes and representations: that they also operate in adults, where they display key empirical signatures of perceptual processing. We first illustrate this overlap using recent work on “core physics”, the domain of core knowledge concerned with physical objects, representing properties such as persistence through time, cohesion, solidity, and causal interactions. We review evidence that adult vision incorporates exactly these representations of core physics, while also displaying empirical signatures of genuinely perceptual mechanisms, such as rapid and automatic operation on the basis of specific sensory inputs, informational encapsulation, and interaction with other perceptual processes. We further argue that the same pattern holds for other areas of core knowledge, including geometrical, numerical, and social domains. In light of this evidence, we conclude that many infant results appealing to precocious reasoning abilities are better explained by sophisticated perceptual mechanisms shared by infants and adults. Our core-perception view elevates the status of perception in accounting for the origins of conceptual knowledge, and generates a range of ready-to-test hypotheses in developmental psychology, vision science, and more...

Monday, November 17, 2025

From today's Wall Street Journal: how the top 0.1% can live.

Information that might be useful for people living in poverty in the Liberty City, Overtown, and Little Haiti neighborhoods of Miami who wonder where they should show up if they ever feel like picking up their pitchforks and axes! ...Here are a few clips from the article by Arian Campo-Flores, "The Ultrarich Pay Big For Extreme Privacy":

The ultrawealthy are wielding their growing fortunes to glide through a rarefied realm unencumbered by the inconveniences of ordinary life. They don't wait in lines. They don't jostle with airport crowds or idle unnecessarily in traffic.

Instead, an ecosystem of exclusive restaurants, clubs, resorts and other service providers delivers them customized and exquisite experiences as fast as possible. The spaces they inhabit are often private, carefully curated and populated by like-minded and similarly well-heeled peers.

The acquisitive power of the very rich is soaring. The net worth held by the top 0.1% of households in the U.S. reached $23.3 trillion in the second quarter of this year, up from $10.7 trillion a decade earlier, according to the Federal Reserve Bank of St. Louis. The amount held by the bottom 50% increased to $4.2 trillion from $900 billion over that period.

The Miami area provides a window into this world. Long a destination for wealthy elites from the Northeast, Europe and Latin America, it has become an even stronger magnet for the affluent in recent years, fostered by pandemic-era migration and the region's emergence as a technology and finance hub.

"There's been an explosion of wealth creators," said Patrick Dwyer, a managing director at NewEdge Wealth, a wealth-management firm, in Miami. "Now they have enough money to live exactly how they want to live."

A new service economy enables them to avoid everybody else if they want to. In the Bentley Residences condo tower under construction in Sunny Isles Beach, north of Miami, car elevators will deliver residents straight up to their homes and deposit vehicles in adjoining "sky garages," avoiding the need to deal with parking valets and reception areas.

Units, whose prices start at about $6 million, will each feature a private pool perched on an expansive terrace. The building's restaurant, available only to owners, will feature C-shaped booths arranged in a way to keep guests out of each other's view. "The ultimate luxury is privacy," said Gil Dezer, the 50-year-old president of Dezer Development, who patented the car lift.

Dezer knows from experience. Several years ago, he traveled to Belize aboard his Gulfstream jet and took a helicopter to a private island resort with only seven villas, each set off from the others and equipped with its own plunge pool and dock. He spent his days lounging and swimming, occasionally ringing a butler to bring him a whisky.

At his 50th birthday party earlier this year, Dezer hired artists including rappers Fat Joe and El Alfa to perform on the beach in front of his home—effectively turning a concert experience that is usually public into a private bash.

Those who can afford it sometimes rent an entire facility to have exclusive use of it. At Centner Wellness, a high-end holistic healing center in Miami that offers a host of treatments employing the latest technology, rich clients occasionally book the whole place for several days, said founder Leila Centner.

When the ultrawealthy choose to socialize, they often seek circles that are meticulously selected, said Gregory Pool, a managing director with NewEdge Wealth in Miami.

Faena Rose is a private social club in Miami Beach focused on art and culture, whose members are vetted by a committee and pay $15,000 initially and another $15,000 annually. They get VIP access to the beach club, spa and other amenities at the Faena Miami Beach hotel, and admission to roughly 80 cultural events a year, held in intimate settings for members only.

Those include dance performances by Alvin Ailey American Dance Theater and recitals by the Metropolitan Opera.

"That level of access is really, really compelling," said Pablo De Ritis, president of Faena Rose.

A newer variant: private dinner clubs, where members get haute cuisine, personalized service and the ability to secure a table whenever they want. ZZ's Club in Miami—where Dezer is a member—features a Japanese restaurant, a sports bar and a cigar terrace. A "culinary concierge" can, with 48 hours' notice, arrange any kind of dining experience members want, from a 12-course caviar feast to a re-creation of a memorable meal from a honeymoon.

"The more personalized and the more seamless, and the less things you have to ask for…that's what great service is," said Jeff Zalaznick, cofounder of Major Food Group, which owns ZZ's.

Masoud and Stephanie Shojaee frequent the members-only MILA MM—where they can enjoy each other's company or that of friends without the distraction of crowds—and other curated social spaces. Last month, she sat in the front row for the Schiaparelli show at Paris Fashion Week, she said, and struck up a conversation with a woman next to her who was from one of the wealthiest families in Monaco. They hit it off and a week later met at a sushi place in Paris together with their husbands.

In these settings, "the conversations for some reason, they just feel safer, and they feel deeper," said Stephanie, 41, president of Shoma Group and a cast member of "The Real Housewives of Miami" show this year. "You hang out with people that are like-minded."

The curation extends to the couple's clothes shopping. They no longer go to high-end malls. Instead, Masoud gets a large suitcase of items from NB44, a members-only apparel brand, shipped to him every quarter, while Stephanie regularly receives racks of new collections from brands like Valentino and Christian Dior along with an alterationist to make any adjustments.

Travel has always been a key feature of wealthy people's lives, and more than ever they prioritize privacy, efficiency and customization, industry specialists say.

Lauren Beall, owner of Travel Couture in Miami Beach, specializes in arranging custom travel experiences for the ultrawealthy. She has booked private islands for clients and flown in Michelin-starred chefs, yoga instructors and performers.

One coveted offering is a suite above the Christian Dior flagship store in Paris that can be rented, and includes an after-hours shopping excursion and a private dinner at Monsieur Dior restaurant. An estate Beall has reserved in Scotland comes with private chefs, horses to explore the countryside and a helicopter to visit towns for the day.

"We're into that exclusive access right now—things that other people can't get," Beall said. "There's a huge price tag that goes with it."

 

Wednesday, November 12, 2025

Dangerous Ideas.......

Deric's MindBlog is almost 20 years old. Its first post appeared on Feb. 8, 2006. The assertions and ideas described in that original post are as fresh and relevant now as they were then, before the appearance of the iPhone, social media, and contracting attention spans. The Edge.org link that once took you to the essays supporting the 'dangerous ideas' now takes you to their published version on Amazon. The "Reality Club" and John Brockman's "Third Culture" cohort of intellectuals has largely dispersed, although you will note many names that are still quite prominent today. Here is the 2006 post:

***************

Edge.org is a website sponsored by the "Reality Club" (i.e. John Brockman, literary agent/impresario/socialite). Brockman has assembled a stable of scientists and other thinkers that he defines as a "third culture" that takes the place of traditional intellectuals in redefining who and what we are.... Each year a question is formulated for all to write on... In 2004 it was "What do you believe is true even though you cannot prove it?" The question for 2005 was "What is your dangerous idea?"

The responses organize themselves into several areas. Here are selected thumbnail summaries most directly relevant to human minds. I've not included cosmology and physics. Go to edge.org to read the essays.

I. Nature of the human self or mind (by the way see my "I-Illusion" essay on my website):

Paulos - The self is a conceptual chimera
Shirky - Free will is going away
Nisbett - We are ignorant of our thinking processes
Horgan - We have no souls
Bloom - There are no souls, mind has a material basis.
Provine - This is all there is.
Anderson - Brains cannot become minds without bodies
Metzinger - Is being intellectually honest about the issue of free will compatible with preserving one's mental health?
Clark - Much of our behavior is determined by non-conscious, automatic uptake of cues and information
Turkle - Simulation will replace authenticity as computer simulation becomes fully naturalized.
Dawkins - A faulty person is no different from a faulty car. There is a mechanism determining behavior that needs to be fixed. The idea of responsibility is nonsense.
Smith - What we know may not change us. We will continue to conceive ourselves as centres of experience, self-knowing and free willing agents.

II. Natural explanations of culture

Sperber - Culture is natural.
Taylor - The human brain is a cultural artifact.
Hauser- There is a universal grammar of mental life.
Pinker - People differ genetically in their average talents and temperaments.
Goodwin - Similar coordinating patterns underlie biological and cultural evolution.
Venter - Revealing the genetic basis of personality and behavior will create societal conflicts.


III. Fundamental changes in political, economic, social order

O'donnell - The state will disappear.
Ridley - Government is the problem not the solution.
Shermer - Where goods cross frontiers armies won't.
Harari -Democracy is on its way out.
Csikszentmihalyi- The free market myth is destroying culture.
Goleman - The internet undermines the quality of human interaction.
Harris - Science must destroy religion.
Porco - Confrontation between science and religion might end when the role played by science in people's lives is the same as that played by religion today.
Bering - Science will never silence God
Fisher - Drugs such as antidepressants jeopardize feelings of attachment and love
Iacoboni - Media Violence Induces Imitative Violence - the Problem with Mirrors
Morton - Our planet is not in peril, just humans are.

Monday, November 10, 2025

Welcome to the Polycene

On reading articles like Tom Friedman’s recent piece in the NYTimes, I realize what a cozy small world this contented retired professor lives in, becoming ever more cognizant of the irrelevance of creatures like himself as vastly more integrative intelligences emerge.   Here are a few clips from his article, which I recommend you read in its entirety: 

Friedman mulls over the question of what is the best label that:


“captures the full fusion taking place between accelerating climate change and rapid transformations in technology, biology, cognition, connectivity, material science, geopolitics and geoeconomics. They have set off an explosion of all sorts of things combining with all sorts of other things — so much so that everywhere you turn these days binary systems seem be giving way to poly ones. Artificial intelligence is hurtling toward “polymathic artificial general intelligence,” climate change is cascading into “poly-crisis,” geopolitics is evolving into “polycentric” and “polyamorous” alignments, once-binary trade is dispersing into “poly-economic” supply webs, and our societies are diversifying into ever more “polymorphic” mosaics.”
As a description of our epoch, Friedman, in a conversation with Craig Mundie (the former head of research and strategy at Microsoft), settles on Mundie's neologism, "the Polycene":
“…a word made up on the spot and not in the dictionary. Admittedly wonky, it is derived from the Greek “poly,” meaning “many.” But it immediately struck me as the right name for this new epoch, where — thanks to smartphones, computers and ubiquitous connectivity — every person and every machine increasingly has a voice to be heard and a lever to impact one another, and the planet, at a previously unimaginable speed and scale.”
Mundie also pointed out:

“that the holy grail of the A.I. revolution was creating a machine capable of “polymathic artificial general intelligence.” This would be a machine that was able to master physics, chemistry, biology, computer science, philosophy, Mozart, Shakespeare and baseball better than any human could, and then reason across all of those disciplines at a high dimensional level, higher than a human ever could, to produce breakthrough insights that no human ever could.

While some skeptics believe that we will never be able to build a machine with truly polymathic A.G.I., many others, including Mundie, believe it is a matter of when, not if.

This is a remarkable phase change in cognition that we are going through: We are moving from programmable computing — where a computer could only ever reflect the insight and intelligence of the human who programmed it — toward polymathic A.G.I. That is where you basically describe the outcome you want, and the A.I. melds insight, creativity and broad knowledge to figure out the rest. We are shifting the boundary of cognition, Mundie argues, from what humans can imagine and program to what computers can discover, imagine and design on their own. It is the mother of all computing phase changes — and a species-level turning point."

Thursday, November 06, 2025

How nature nurtures

MindBlog has passed on a number of articles on how exposure to nature reduces stress (see a sample list below). Here is a further contribution from Sudimac et al., who show that amygdala activity decreases as a result of a one-hour walk in nature:

Since living in cities is associated with an increased risk for mental disorders such as anxiety disorders, depression, and schizophrenia, it is essential to understand how exposure to urban and natural environments affects mental health and the brain. It has been shown that the amygdala is more activated during a stress task in urban compared to rural dwellers. However, no study so far has examined the causal effects of natural and urban environments on stress-related brain mechanisms. To address this question, we conducted an intervention study to investigate changes in stress-related brain regions as an effect of a one-hour walk in an urban (busy street) vs. natural environment (forest). Brain activation was measured in 63 healthy participants, before and after the walk, using a fearful faces task and a social stress task. Our findings reveal that amygdala activation decreases after the walk in nature, whereas it remains stable after the walk in an urban environment. These results suggest that going for a walk in nature can have salutogenic effects on stress-related brain regions, and consequently, it may act as a preventive measure against mental strain and potentially disease. Given rapidly increasing urbanization, the present results may influence urban planning to create more accessible green areas and to adapt urban environments in a way that will be beneficial for citizens’ mental health.

A few previous MindBlog posts on this topic:

Blue Mind - looking at water improves your health and calm 

Pictures of green spaces make you happier. 

More green space in childhood, fewer psychiatric disorders in adulthood.

 ...

 (The above is a repost of MindBlog's 9/26/2022 post)

 

Tuesday, November 04, 2025

Inequality and hierarchy are features of all natural systems

DeDeo and Hobson offer a commentary on a model developed by Kawakatsu et al. (open source) that explains the emergence of hierarchy in networked endorsement dynamics. I pass on a few clips from both, and after that I list titles, with links, of a number of previous MindBlog posts that have presented explanations of why inequality and hierarchy are features of all natural systems. First, from DeDeo and Hobson:

As an old Scottish proverb says, “give a Dog an ill Name, and he’ll soon be hanged.” Even when the signal has little to do with underlying reality, endorsement—or contempt—can produce lasting consequences for a person’s social position. The ease with which such pieces of folk wisdom translate across both time and species suggests that there is a general, and even perhaps universal, logic to hierarchies and how they form. Kawakatsu et al. make an important advance in the quest for this kind of understanding, providing a general model for how subtle differences in individual-level decision-making can lead to hard-to-miss consequences for society as a whole...Their work reveals two distinct regimes—one egalitarian, one hierarchical—that emerge from shifts in individual-level judgment. These lead to statistical methods that researchers can use to reverse engineer observed hierarchies, and understand how signaling systems work when prestige and power are in play. The results make a singular contribution at the intersection of two distinct traditions of research into social power: the mechanistic (how hierarchies get made) and the functional (the adaptive roles they can play in society).
Kawakatsu et al.'s abstract:
Many social and biological systems are characterized by enduring hierarchies, including those organized around prestige in academia, dominance in animal groups, and desirability in online dating. Despite their ubiquity, the general mechanisms that explain the creation and endurance of such hierarchies are not well understood. We introduce a generative model for the dynamics of hierarchies using time-varying networks, in which new links are formed based on the preferences of nodes in the current network and old links are forgotten over time. The model produces a range of hierarchical structures, ranging from egalitarianism to bistable hierarchies, and we derive critical points that separate these regimes in the limit of long system memory. Importantly, our model supports statistical inference, allowing for a principled comparison of generative mechanisms using data. We apply the model to study hierarchical structures in empirical data on hiring patterns among mathematicians, dominance relations among parakeets, and friendships among members of a fraternity, observing several persistent patterns as well as interpretable differences in the generative mechanisms favored by each. Our work contributes to the growing literature on statistically grounded models of time-varying networks.
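To make the abstract's generative mechanism a bit more tangible, here is a heavily simplified toy simulation in Python. It is not the authors' published model: the ranking rule (weighted in-degree), the exponential preference, and all parameter values are stand-ins of my own, chosen only to show how a stronger preference for high-ranked targets, combined with gradual forgetting of old links, can tip a network from an egalitarian regime into a hierarchical one.

```python
# Toy endorsement dynamics, loosely in the spirit of Kawakatsu et al.: at each
# step a random node endorses another node, preferring currently high-ranked
# targets, while old endorsements decay. Not the published model.
import numpy as np

def simulate(n_nodes=20, n_steps=5000, beta=2.0, memory=0.99, seed=0):
    """beta: strength of preference for high-ranked targets.
    memory: per-step retention of old endorsement weight (forgetting = 1 - memory)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n_nodes, n_nodes))   # A[i, j] = current weight of i's endorsements of j
    for _ in range(n_steps):
        A *= memory                    # old links are gradually forgotten
        i = rng.integers(n_nodes)      # a random endorser acts
        rank = A.sum(axis=0)           # crude rank proxy: weighted in-degree
        p = np.exp(beta * (rank - rank.max()))
        p[i] = 0.0                     # no self-endorsement
        p /= p.sum()
        j = rng.choice(n_nodes, p=p)   # target chosen according to preference
        A[i, j] += 1.0
    return A.sum(axis=0)               # each node's current endorsement weight

for beta in (0.0, 2.0):
    weights = simulate(beta=beta)
    top_share = weights.max() / weights.sum()
    print(f"beta = {beta}: top node holds {top_share:.0%} of current endorsement weight")
```

With beta near zero the endorsement weight stays spread roughly evenly across nodes; with a larger beta a few nodes capture most of it, echoing the egalitarian and hierarchical regimes that the paper characterizes far more rigorously.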
And, I list a few relevant past MindBlog posts:
Wealth inequality as a law of nature.
The science of inequality.
The Pareto Principle - unfairness is a law.
Simple mechanisms can generate wealth inequality.
A choice mind-set perpetuates acceptance of wealth inequality.

 

(the above is a re-post of my 6/4/2021 MindBlog post).