Wednesday, June 30, 2021

Seven nuggets on how we confuse ourselves about our brains and our world.

In a series of posts starting on Nov. 27, 2020, I attempted to abstract and condense the ideas in Lisa Feldman Barrett’s 2017 book “How Emotions Are Made: The Secret Life of the Brain.” That book is a hard slog, as was my series of posts on its contents. Barrett also did her own condensation in her follow-up book, “Seven and a Half Lessons About the Brain,” which appeared in late 2020 at the same time as my posts, and I’ve finally gotten around to scanning through it. I want to pass on her brief epilogue, which extracts a few crisp nuggets from her lessons:
ONCE UPON A TIME, you were a little stomach on a stick, floating in the sea. Little by little, you evolved. You grew sensory systems and learned that you were part of a bigger world. You grew bodily systems to navigate that world efficiently. And you grew a brain that ran a budget for your body. You learned to live in groups with all the other little brains-in-bodies. You crawled out of the water and onto land. And across the expanse of evolutionary time - with the innovation that comes from trial and error and the deaths of trillions of animals - you ended up with a human brain. A brain that can do so many impressive things but at the same time severely misunderstands itself.
- A brain that constructs such rich mental experiences that we feel like emotion and reason wrestle inside us
- A brain that’s so complex that we describe it by metaphors and mistake them for knowledge
- A brain that’s so skilled at rewiring itself that we think we’re born with all sorts of things that we actually learn
- A brain that’s so effective at hallucinating that we believe we see the world objectively, and so fast at predicting that we mistake our movements for reactions
- A brain that regulates other brains so invisibly that we presume we’re independent of each other
- A brain that creates so many kinds of minds that we assume there’s a single human nature to explain them all
- A brain that’s so good at believing its own inventions that we mistake social reality for the natural world
We know much about the brain today, but there are still so many more lessons to learn. For now, at least, we’ve learned enough to sketch our brain’s fantastical evolutionary journey and consider the implications for some of the most central and challenging aspects of our lives.
Our kind of brain isn’t the biggest in the animal kingdom, and it’s not the best in any objective sense. But it’s ours. It’s the source of our strengths and our foibles. It gives us our capacity to build civilizations and our capacity to tear down each other. It makes us simply, imperfectly, gloriously human.

Monday, June 28, 2021

In our brains everything changes

Sometimes learning the hard neuroscience of how our brains work leaves me feeling a bit queasy. The first time this happened was when I learned about the Libet experiments, which showed that cells in our motor cortex start a movement well before we ‘decide’ to initiate it. “We” think we are initiating a movement when in fact “it” (those brain cells) is already well on the way to doing it. So what happened to my ‘free will’? Well...there is a workaround for that problem that I explain in my “I Illusion” and subsequent web lectures.

A further uncomfortable jolt comes on seeing evidence that the brain cells that become active during a familiar experience can change over time. Each instance of the recall of an important event can recruit a different group of nerve cells, because each time the memory is fetched from the neuronal ‘library’ it gets put back, sometimes slightly altered, in different nerve cell collections and connections. A very striking example of this has been provided by Schoonover et al., who show that the network of nerve cells active when a particular smell triggers a specific behavior changes over time, moving to different brain areas. This is an example of ‘representational plasticity,’ which is discussed in a review article by Rule et al.

This conflicts with our common-sense view of how our minds should work. If you have an experience and then later remember it, you must have put it somewhere in your brain’s library of nerve cell connections, like a book on a library shelf, so that all you have to do to remember something is go fetch it. If the experience is an emotional one, it couples with a hard-wired circuit for that emotion. This essentialist view of how our minds work is being thoroughly displaced as experimental evidence continues to accumulate showing that in each moment we are constructing our experience anew - reminding us of the Buddhist saying that the river you view flowing past is never the same twice. The series of MindBlog posts (starting here) on the work and ideas of Barrett covers this material.

It is from constant change and flux in our evolved neuroendocrine circuitry that we generate the illusion of certainty or constancy - expectations of selves, rules, objects, and emotions that stay in place. We model the world we expect to encounter before each moment we enter it. If our expectations are not met, our brains perk up to adjust them appropriately.
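The predict-then-correct loop sketched above can be caricatured as a simple delta-rule update. This is only a toy illustration of the general idea; the function, learning rate, and numbers below are invented and come from no particular study:

```python
# A toy delta-rule update: the "brain" predicts a sensory value and
# adjusts its model in proportion to the prediction error.
def update(prediction, observation, learning_rate=0.3):
    error = observation - prediction       # mismatch between model and world
    return prediction + learning_rate * error

prediction = 0.0
for observation in [10, 10, 10, 10, 10]:   # a stable, unsurprising world
    prediction = update(prediction, observation)
print(round(prediction, 2))                # the model converges toward 10
```

When the world matches expectations, the error term shrinks and the model stops changing; a surprising observation produces a large error and a correspondingly large adjustment, which is the "perking up" described above.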


Friday, June 25, 2021

Lack of mathematical education impacts brain development and future attainment

From Zacharopoulos et al.:  


Our knowledge of the effect of a specific lack of education on the brain and cognitive development is currently poor but is highly relevant given differences between countries in their educational curricula and the differences in opportunities to access education. We show that within the same society, adolescent students who specifically lack mathematical education exhibited reduced brain inhibition levels in a key brain area involved in reasoning and cognitive learning. Importantly, these brain inhibition levels predicted mathematical attainment ∼19 mo later, suggesting they play a role in neuroplasticity. Our study provides biological understanding of the impact of the lack of mathematical education on the developing brain and the mutual play between biology and education.
Formal education has a long-term impact on an individual’s life. However, our knowledge of the effect of a specific lack of education, such as in mathematics, is currently poor but is highly relevant given the extant differences between countries in their educational curricula and the differences in opportunities to access education. Here we examined whether neurotransmitter concentrations in the adolescent brain could classify whether a student is lacking mathematical education. Decreased γ-aminobutyric acid (GABA) concentration within the middle frontal gyrus (MFG) successfully classified whether an adolescent studies math and was negatively associated with frontoparietal connectivity. In a second experiment, we uncovered that our findings were not due to preexisting differences before a mathematical education ceased. Furthermore, we showed that MFG GABA not only classifies whether an adolescent is studying math or not, but it also predicts the changes in mathematical reasoning ∼19 mo later. The present results extend previous work in animals that has emphasized the role of GABA neurotransmission in synaptic and network plasticity and highlight the effect of a specific lack of education on MFG GABA concentration and learning-dependent plasticity. Our findings reveal the reciprocal effect between brain development and education and demonstrate the negative consequences of a specific lack of education during adolescence on brain plasticity and cognitive functions.

Wednesday, June 23, 2021

Decision-making ability, psychopathology, and brain connectivity

An open-access review offered by Dolan and his colleagues continues the story of correlating our human competencies with our brain structures. They describe
...a new cognitive construct—decision acuity—that captures global decision-making ability. High decision acuity prominently reflected low decision variability. Decision acuity showed acceptable reliability, increased with age, and was associated with mental health symptoms independently of intelligence. Crucially, it was associated with distinctive resting-state networks, in particular in brain regions typically engaged by decision-making tasks. The association between decision acuity and functional connectivity was temporally stable and distinct from that of IQ.

• Young people have a general decision-making ability, which we call “decision acuity” 
• Decision acuity is reflected in how strongly connected certain brain networks are 
• Low decision acuity is associated with psychopathology and low general social function
Decision-making is a cognitive process of central importance for the quality of our lives. Here, we ask whether a common factor underpins our diverse decision-making abilities. We obtained 32 decision-making measures from 830 young people and identified a common factor that we call “decision acuity,” which was distinct from IQ and reflected a generic decision-making ability. Decision acuity was decreased in those with aberrant thinking and low general social functioning. Crucially, decision acuity and IQ had dissociable brain signatures, in terms of their associated neural networks of resting-state functional connectivity. Decision acuity was reliably measured, and its relationship with functional connectivity was also stable when measured in the same individuals 18 months later. Thus, our behavioral and brain data identify a new cognitive construct that underpins decision-making ability across multiple domains. This construct may be important for understanding mental health, particularly regarding poor social function and aberrant thought patterns.
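The "common factor" extraction the abstract describes can be illustrated with a toy simulation. This sketch uses a first principal component as a simple stand-in for the paper's factor analysis; the latent trait, loadings, and noise levels are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_measures = 830, 32   # sizes loosely echo the study; data are simulated

# Hypothetical latent "decision acuity" trait loading onto every measure
acuity = rng.normal(size=n_subjects)
loadings = rng.uniform(0.4, 0.8, size=n_measures)
data = np.outer(acuity, loadings) + rng.normal(size=(n_subjects, n_measures))

# Standardize each measure, then take the first principal component
# as the shared factor underlying all 32 tasks
z = (data - data.mean(axis=0)) / data.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
factor_scores = z @ vt[0]

# The recovered factor should closely track the simulated latent trait
r = np.corrcoef(factor_scores, acuity)[0, 1]
print(abs(r))
```

The point of the sketch is only that when many noisy measures share one underlying source, a single extracted component recovers it well, which is the logic behind calling "decision acuity" a general ability rather than 32 separate ones.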

Monday, June 21, 2021

Giving help to others may increase your life span.

An interesting analysis from Chen et al.:


Social support is a key contributor to mortality risk, with effects comparable in magnitude (though opposite in direction) to smoking and obesity. Research has largely focused on either support received or support given; yet, everyday social relationships typically involve interchanges of support rather than only giving or only receiving. Using a longitudinal US national sample, this article elucidates how the balance of social support (amount of giving one does on a monthly basis relative to receiving support) relates to all-cause mortality over a 23-y follow-up period. Although correlational, one possible implication of the findings is that encouraging individuals to give support (e.g., helping others with errands) in moderation, while also being willing to accept support, may have longevity benefits.
While numerous studies exist on the benefits of social support (both receiving and giving), little research exists on how the balance between the support that individuals regularly give versus that which they receive from others relates to physical health. In a US national sample of 6,325 adults from the National Survey of Midlife Development in the United States, participants were assessed at baseline on hours of social support given and received on a monthly basis, with all-cause mortality data collected from the National Death Index over a 23-y follow-up period. Participants who were relatively balanced in the support they gave compared to what they received had a lower risk of all-cause mortality than those who either disproportionately received support from others (e.g., received more hours of support than they gave each month) or disproportionately gave support to others (e.g., gave many more hours of support a month than they received). These findings applied to instrumental social support (e.g., help with transportation, childcare). Additionally, participants who gave a moderate amount of instrumental social support had a lower risk of all-cause mortality than those who either gave very little support or those who gave a lot of support to others. Associations were evident over and above demographic, medical, mental health, and health behavior covariates. Although results are correlational, one interpretation is that promoting a balance, in terms of the support that individuals regularly give relative to what they receive in their social relationships, may not only help to strengthen the social fabric of society but may also have potential physical health benefits.

Friday, June 18, 2021

Our 'Self' extends vastly beyond our brain.

I want to pass on two interesting articles that review how the self we usually take to be largely inside our heads (somewhere behind the eyes) in fact has meaning only in contexts that extend vastly beyond the little grey cells in our cranium. Annie Murphy Paul notes four basic extensions that let our brains be less workhorse and more orchestra conductor.
...the first and most obvious being our tools. Technology is designed to fulfill just this function — who remembers telephone numbers anymore, now that our smartphones can supply them?

Our external memory stores have evolved from marks on clay tablets through printed books to bytes stored in the cloud. 

A second resource is our bodies:

The burgeoning field of embodied cognition has demonstrated that the body — its sensations, gestures and movements — plays an integral role in the thought processes that we usually locate above the neck. The body is especially adept at alerting us to patterns of events and experience, patterns that are too complex to be held in the conscious mind. When a scenario we encountered before crops up again, the body gives us a nudge: communicating with a shiver or a sigh, a quickening of the breath or a tensing of the muscles. Those who are attuned to such cues can use them to make more-informed decisions. A study led by a team of economists and neuroscientists in Britain, for instance, reported that financial traders who were better at detecting their heartbeats — a standard test of what is known as interoception, or the ability to perceive internal signals — made more profitable investments and lasted longer in that notoriously volatile profession.
This second extension is the subject of the other article I want to mention, in which Emily Underwood reviews communication between the brain and other organs, mediated by the vagus nerve, that shapes how we think, remember, and feel (the article is not open access, but motivated readers can obtain a copy by emailing me).
Scientists are unraveling how our organs talk to the brain and how the brain talks back. That two-way communication, known as interoception, encompasses a complex system of nerves and hormones, including the vagus nerve, a massive network of fibers that travel from nearly every internal organ to the base of the brain and back again. Scientists have long known the vagus nerve carries signals between the organs and the brainstem. But new studies show signals carried by the vagus climb beyond the brainstem and into brain regions involved in memory, emotion, and decision-making. The research is challenging traditional distinctions between disorders of the brain and body, and may even hold clues to the nature of consciousness.
Now, back to Paul's article, and her third extension of our brain:
Another extraneural resource available for our use is physical space. Moving mental contents out of our heads and onto the space of a sketch pad or whiteboard allows us to inspect it with our senses, a cognitive bonus that the psychologist Daniel Reisberg calls “the detachment gain.”...Three-dimensional space offers additional opportunities for offloading mental work and enhancing the brain’s powers. When we turn a problem to be solved into a physical object that we can interact with, we activate the robust spatial abilities that allow us to navigate through real-world landscapes. This suite of human strengths, honed over eons of evolution, is wasted when we sit still and think.
A fourth extension of our minds...
...can be found in other people’s minds. We are fundamentally social creatures, oriented toward thinking with others. Problems arise when we do our thinking alone — for example, the well-documented phenomenon of confirmation bias, which leads us to preferentially attend to information that supports the beliefs we already hold. According to the argumentative theory of reasoning, advanced by the cognitive scientists Hugo Mercier and Dan Sperber, this bias is accentuated when we reason in solitude. Humans’ evolved faculty for reasoning is not aimed at arriving at objective truth, Mercier and Sperber point out; it is aimed at defending our arguments and scrutinizing others’. It makes sense, they write, “for a cognitive mechanism aimed at justifying oneself and convincing others to be biased and lazy. The failures of the solitary reasoner follow from the use of reason in an ‘abnormal’ context’” — that is, a nonsocial one.
All four of these extraneural resources — technology, the body, physical space, social interaction — can be understood as mental extensions that allow the brain to accomplish far more than it could on its own. This is the theory of the extended mind, introduced more than two decades ago by the philosophers Andy Clark and David Chalmers. A 1998 article of theirs published in the journal Analysis began by posing a question that would seem to have an obvious answer: “Where does the mind stop and the rest of the world begin?” They went on to offer an unconventional response. The mind does not stop at the usual “boundaries of skin and skull,” they maintained. Rather, the mind extends into the world and augments the capacities of the biological brain with outside-the-brain resources.
Compared to the attention we lavish on the brain, we expend relatively little effort on cultivating our ability to think outside the brain...The limits of this approach have become painfully evident. The days when we could do it all in our heads are over. Our knowledge is too abundant, our expertise too specialized, our challenges too enormous. The best chance we have to thrive in the extraordinarily complex world we’ve created is to allow that world to assume some of our mental labor. Our brains can’t do it alone.

Thursday, June 17, 2021

A.I. should be afraid of us.

Alan Burdick does a nice summary of some recent work on interactions between humans and artificial intelligence algorithms designed to appear empathetic:
Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent A.I., they are less likely to do so than if the bot were an actual person...We basically treat a perfect stranger better than A.I.
A study by Deroy and colleagues found
...that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot, it’s that we do: The bot is guaranteed benevolent, a capital-S sucker, so we exploit it.
That conclusion was borne out by reports afterward from the study’s participants. “Not only did they tend to not reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added, “You can just ignore the bot and there is no feeling that you have broken any mutual obligation.”
This could have real-world implications. When we think about A.I., we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the A.I. doesn’t account for your bad behavior, an accident could ensue.
“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots...what guarantees that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?"
There are similar consequences for A.I., too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An A.I. that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever...
There we have it: The true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, A.I. therapy will be sophisticated enough to help driverless cars solve their anger-management issues.

Wednesday, June 16, 2021

Your blood proteins can tell you the best kind of exercise for your body

Since I am heading into my 80th year, and realizing that any further years must be regarded as a gift from nature, I'm attentive to anything that NYTimes "Phys Ed" columnist Gretchen Reynolds (who is no spring chicken) writes about exercise and aerobic fitness (both of which are strongly linked to longevity). Most recently, she describes work by Robbins et al. that finds a correlation between the levels of different blood proteins and how individuals respond to exercise.
If we all begin the same exercise routine tomorrow, some of us will become much fitter, others will get a little more in shape, and a few of us may actually lose fitness. Individual responses to exercise can vary that wildly and, until now, unpredictably. But a fascinating new study of more than 650 men and women suggests that the levels of certain proteins in our bloodstreams might foretell whether and how we will respond to various exercise regimens.
Using state-of-the-art molecular tools, the scientists began enumerating the numbers and types of thousands of proteins in each of the 654 people’s bloodstreams. Then they tabulated those figures with data about everyone’s aerobic fitness before and after their five months of exercise...The levels of 147 proteins were strongly associated with people’s baseline fitness, the researchers found. If some of those protein numbers were high and others low, the resulting molecular profiles indicated how fit someone was.
More intriguing, a separate set of 102 proteins tended to predict people’s physical responses to exercise. Higher and lower levels of these molecules — few of which overlapped with the proteins related to people’s baseline fitness — prophesied the extent to which someone’s aerobic capacity would increase, if at all, with exercise...Finally, because aerobic fitness is so strongly linked to longevity, the scientists crosschecked levels of the various fitness-related proteins in the blood of people enrolled in a separate health study that included mortality records, and found that protein signatures implying lower or greater fitness response likewise signified shorter or longer lives.
Taken as a whole, the new study’s results suggest that “molecular profiling tools might help to tailor” exercise plans. Someone whose bloodstream protein signature suggests he or she might gain little fitness from a standard, moderate walking, cycling or swimming routine, for instance, might be nudged toward higher-intensity workouts or resistance training.
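The idea of predicting training response from a baseline protein panel can be sketched as a simulated regression. Everything below is invented for illustration (the protein levels, the sparse set of "predictive" proteins, and the sample split); the actual study's proteomic and statistical pipeline was far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_proteins = 654, 102   # sizes loosely echo the study; data are simulated

# Hypothetical baseline protein levels; a small subset drives training response
proteins = rng.normal(size=(n, n_proteins))
true_w = np.zeros(n_proteins)
true_w[:10] = rng.normal(size=10)          # 10 "predictive" proteins
fitness_gain = proteins @ true_w + rng.normal(size=n)

# Fit ordinary least squares on a training split, then check whether the
# learned protein signature predicts fitness gains in held-out people
train, test = slice(0, 500), slice(500, None)
w, *_ = np.linalg.lstsq(proteins[train], fitness_gain[train], rcond=None)
pred = proteins[test] @ w
r = np.corrcoef(pred, fitness_gain[test])[0, 1]
print(r)
```

The held-out correlation is the relevant check: a protein signature is only useful for tailoring exercise plans if it predicts responses in people the model never saw.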

Tuesday, June 15, 2021

Equality and Equity in the Life Sciences

After yesterday's heavy post describing 'Four Americas' - with the last listed being 'Just America,' rising out of the age of the Millennials and addressing the systemic racism that has permeated American life and politics since the 1700s - I decided to take a brief rest by going back to my reviewing of the tables of contents of various life science journals, a respite, I thought, of looking at politically neutral basic science. No such luck...nothing is politically neutral in these times...virtually all of the journals I look at are attempting to examine and atone for their past inattention to issues of equality and equity. From the tables of contents of the first four journals in my queue:

From Cell: Affirming NIH’s commitment to addressing structural racism in the biomedical research enterprise

NIH has acknowledged and committed to ending structural racism. The framework for NIH’s approach, summarized here, includes understanding barriers; developing robust health disparities/equity research; improving its internal culture; being transparent and accountable; and changing the extramural ecosystem so that diversity, equity, and inclusion are reflected in funded research and the biomedical workforce.
From Current Biology: Trends Voices: On inclusion and diversity
Meet with us to share personal stories of your experience as a scientist, as well as accounts of what you’re doing to redress existing bias in scientific inquiry, for the benefit of science and society. Book a meeting.
From Proceedings of the National Academy of Sciences:
News Feature: Keeping Black students in STEM
From Social Cognitive and Affective Neuroscience:
The neural underpinnings of making racial distinctions.

Monday, June 14, 2021

Four Americas

I want to recommend that MindBlog readers have a look at George Packer's essay in the Atlantic presenting a condensed version of arguments in his new book, Last Best Hope: America in Crisis and Renewal. Below, I pass on an even more condensed version in a few clips from the Atlantic article: 

1 - Free America

Call the first narrative “Free America.” In the past half century it’s been the most politically powerful of the four. Free America draws on libertarian ideas, which it installs in the high-powered engine of consumer capitalism. The freedom it champions is very different from Alexis de Tocqueville’s art of self-government. It’s personal freedom, without other people—the negative liberty of “Don’t tread on me.”...The conservative movement began to dominate the Republican Party in the 1970s, and then much of the country after 1980 with the presidency of Ronald Reagan.
A character in Jonathan Franzen’s 2010 novel, Freedom, puts it this way: “If you don’t have money, you cling to your freedoms all the more angrily. Even if smoking kills you, even if you can’t afford to feed your kids, even if your kids are getting shot down by maniacs with assault rifles. You may be poor, but the one thing nobody can take away from you is the freedom to fuck up your life.”
2 - Smart America
The new knowledge economy created a new class of Americans: men and women with college degrees, skilled with symbols and numbers—salaried professionals in information technology, computer engineering, scientific research, design, management consulting, the upper civil service, financial analysis, law, journalism, the arts, higher education...they dominate the top 10 percent of American incomes, with outsize economic and cultural influence...After the 1970s, meritocracy..., a system intended to give each new generation an equal chance to rise, created a new hereditary class structure. Educated professionals pass on their money, connections, ambitions, and work ethic to their children, while less educated families fall further behind, with less and less chance of seeing their children move up...a lower-class child is nearly as unlikely to be admitted to one of the top three Ivy League universities as they would have been in 1954.
In the early 1970s, the [Democratic] party became the home of educated professionals, nonwhite voters, and the shrinking unionized working class. The more the party identified with the winners of the new economy, the easier it became for the Republican Party to pull away white workers by appealing to cultural values...these two classes, rising professionals and sinking workers, which a couple of generations ago were close in income and not so far apart in mores, no longer believe they belong to the same country. But they can’t escape each other, and their coexistence breeds condescension, resentment, and shame...Smart Americans are uneasy with patriotism. It’s an unpleasant relic of a more primitive time, like cigarette smoke or dog racing. It stirs emotions that can have ugly consequences. The winners in Smart America—connected by airplane, internet, and investments to the rest of the globe—have lost the capacity and the need for a national identity, which is why they can’t grasp its importance for others...abandoning patriotism to other narratives guarantees that the worst of them will claim it.
3 - Real America
Real America is a very old place. The idea that the authentic heart of democracy beats hardest in common people who work with their hands goes back to the 18th century. It was embryonic in the founding creed of equality. “State a moral case to a ploughman and a professor,” Thomas Jefferson wrote in 1787. “The former will decide it as well, and often better than the latter, because he has not been led astray by artificial rules.” Moral equality was the basis for political equality...The triumph of popular democracy brought an anti-intellectual bias to American politics that never entirely disappeared. Self-government didn’t require any special learning, just the native wisdom of the people...The overwhelmingly white crowds that lined up to hear Palin speak were nothing new. Real America has always been a country of white people...
From its beginnings, Real America has also been religious, and in a particular way: evangelical and fundamentalist, hostile to modern ideas and intellectual authority...Finally, Real America has a strong nationalist character. Its attitude toward the rest of the world is isolationist, hostile to humanitarianism and international engagement, but ready to respond aggressively to any incursion against national interests...Ever since the age of Reagan, the Republican Party has been a coalition of business interests and less affluent white people, many of them evangelical Christians. The persistence of the coalition required an immense amount of self-deception on both sides.
When Trump ran for president, the party of Free America collapsed into its own hollowness...Trump didn’t try to shape his people ideologically with new words and concepts. He used the low language of talk radio, reality TV, social media, and sports bars, and to his listeners this language seemed far more honest and grounded in common sense than the mincing obscurities of “politically correct” experts. His populism brought Jersey Shore to national politics. The goal of his speeches was not to whip up mass hysteria but to get rid of shame. He leveled everyone down together...More than anything, Trump was a demagogue—a thoroughly American type, familiar to us from novels like All the King’s Men and movies like Citizen Kane. “Trump is a creature native to our own style of government and therefore much more difficult to protect ourselves against,” the Yale political theorist Bryan Garsten wrote. “He is a demagogue, a popular leader who feeds on the hatred of elites that grows naturally in democratic soil.” A demagogue can become a tyrant, but the people put him there—the people who want to be fed fantasies and lies, the people who set themselves apart from and above their compatriots. So the question isn’t who Trump was, but who we are.
4 - Just America
In 2014, American character changed....A large and influential generation came of age in the shadow of accumulating failures by the ruling class—especially by business and foreign-policy elites...My generation told our children’s generation a story of slow but steady progress...If anyone doubted that the country was becoming a more perfect union, the election of a Black president who loved to use that phrase proved it...Of course the kids didn’t buy it. In their eyes “progress” looked like a thin upper layer of Black celebrities and professionals, who carried the weight of society’s expectations along with its prejudices, and below them, lousy schools, overflowing prisons, dying neighborhoods...Then came one video after another of police killing or hurting unarmed Black people. Then came the election of an openly racist president. These were conditions for a generational revolt.
Call this narrative “Just America.” It’s another rebellion from below. As Real America breaks down the ossified libertarianism of Free America, Just America assails the complacent meritocracy of Smart America. It does the hard, essential thing that the other three narratives avoid, that white Americans have avoided throughout history. It forces us to see the straight line that runs from slavery and segregation to the second-class life so many Black Americans live today—the betrayal of equality that has always been the country’s great moral shame, the heart of its social problems.
In the same way that libertarian ideas had been lying around for Americans to pick up in the stagflated 1970s, young people coming of age in the disillusioned 2000s were handed powerful ideas about social justice to explain their world. The ideas came from different intellectual traditions: the Frankfurt School in 1920s Germany, French postmodernist thinkers of the 1960s and ’70s, radical feminism, Black studies. They converged and recombined in American university classrooms, where two generations of students were taught to think as critical theorists.
Critical theory upends the universal values of the Enlightenment: objectivity, rationality, science, equality, freedom of the individual. These liberal values are an ideology by which one dominant group subjugates another. All relations are power relations, everything is political, and claims of reason and truth are social constructs that maintain those in power...But in identity politics, equality refers to groups, not individuals, and demands action to redress disparate outcomes among groups—in other words, equity, which often amounts to new forms of discrimination. In practice, identity politics inverts the old hierarchy of power into a new one: bottom rail on top. The fixed lens of power makes true equality, based on common humanity, impossible...By the turn of the millennium, these ideas were nearly ubiquitous in humanities and social-science departments. Embracing them had become an important credential for admittance into sectors of the professorate...In turn, these scholars formed the worldview of young Americans educated by elite universities to thrive in the meritocracy.
Millions of young Americans were steeped in the assumptions of critical theory and identity politics without knowing the concepts...Here is the revolutionary power of the narrative: What had been considered, broadly speaking, American history (or literature, philosophy, classics, even math) is explicitly defined as white, and therefore supremacist...The most radical version of the narrative lashes together the oppression of all groups in an encompassing hell of white supremacy, patriarchy, homophobia, transphobia, plutocracy, environmental destruction, and drones...There are too many things that Just America can’t talk about for the narrative to get at the hardest problems. It can’t talk about the complex causes of poverty. Structural racism—ongoing disadvantages that Black people suffer as a result of policies and institutions over the centuries—is real. But so is individual agency, and in the Just America narrative, it doesn’t exist. The narrative can’t talk about the main source of violence in Black neighborhoods, which is young Black men, not police.
...another way to understand Just America is in terms of class. Why does so much of its work take place in human-resources departments, reading lists, and awards ceremonies? In the summer of 2020, the protesters in the American streets were disproportionately Millennials with advanced degrees making more than $100,000 a year. Just America is a narrative of the young and well educated, which is why it continually misreads or ignores the Black and Latino working classes. The fate of this generation of young professionals has been cursed by economic stagnation and technological upheaval. The jobs their parents took for granted have become much harder to get, which makes the meritocratic rat race even more crushing. Law, medicine, academia, media—the most desirable professions—have all contracted. The result is a large population of overeducated, underemployed young people living in metropolitan areas...The historian Peter Turchin coined the phrase elite overproduction to describe this phenomenon. He found that a constant source of instability and violence in previous eras of history, such as the late Roman empire and the French Wars of Religion, was the frustration of social elites for whom there were not enough jobs. Turchin expects this country to undergo a similar breakdown in the coming decade.
All four of the narratives I’ve described emerged from America’s failure to sustain and enlarge the middle-class democracy of the postwar years. They all respond to real problems. Each offers a value that the others need and lacks ones that the others have. Free America celebrates the energy of the unencumbered individual. Smart America respects intelligence and welcomes change. Real America commits itself to a place and has a sense of limits. Just America demands a confrontation with what the others want to avoid.
In Free America, the winners are the makers, and the losers are the takers who want to drag the rest down in perpetual dependency on a smothering government. In Smart America, the winners are the credentialed meritocrats, and the losers are the poorly educated who want to resist inevitable progress. In Real America, the winners are the hardworking folk of the white Christian heartland, and the losers are treacherous elites and contaminating others who want to destroy the country. In Just America, the winners are the marginalized groups, and the losers are the dominant groups that want to go on dominating.
I don’t much want to live in the republic of any of them...I don’t think we are dying. We have no choice but to live together—we’re quarantined as fellow citizens. Knowing who we are lets us see what kinds of change are possible...Meanwhile, we remain trapped in two countries. Each one is split by two narratives—Smart and Just on one side, Free and Real on the other. Neither separation nor conquest is a tenable future. The tensions within each country will persist even as the cold civil war between them rages on.

Friday, June 11, 2021

Social Media isn't the problem...We are.

Mark Manson is one really smart guy. I have to pass on a précis of a nicely structured item pointed to in his weekly newsletter:

He begins his piece by recalling relatively recent moral panics over offensive music and violent video games:

Today, we chuckle at the hair metal bands of the late eighties as innocent fun while the shocking hip hop of the early nineties has evolved into a cornerstone of our modern culture. And after hundreds of studies across multiple decades, the American Psychological Association reports that they still haven’t found any evidence that playing video games motivates people to commit violence...Time has resolved our collective anxiety. The new has become the old, the shocking has become the expected. Yet, today we find ourselves in the grips of another moral panic—this time around social media.

1 - The New Culprit

Manson lists a number of books decrying the effects of social media and notes that nothing has hit that “sky is falling” pitch of hysteria quite like the recent Netflix film, The Social Dilemma. I would call it a documentary except that there is a conspicuous absence of any data or actual scientific evidence in it. Instead, we’re treated to fictionalized reenactments of the repeated warnings given by tech industry “experts,” all of whom simply repeat and reinforce one another’s opinions for 94 minutes...The tech author and social media defender, Nir Eyal, has told me that the entirety of his three-hour interview was left out of the film, as was all but about ten seconds of the interview with another skeptic of social media criticism, Jonathan Haidt.
The problem is the data...there’s been research on social media and its effects on people. Lots of it: how it affects adults, how it affects children, how it influences politics and mood and self-esteem and general happiness...the results will probably surprise you. Social media is not the problem...We are.
2 - Three Common Criticisms of Social Media That Are Wrong
Criticism One: Social Media Harms Mental Health
...over the past two decades, we have seen a worrying increase in rates of suicide, depression, and anxiety, especially in young people. But it’s not clear that social media is the cause...A lot of scary research on social media usage is correlational research...lots of social media usage = lots of depressed teenagers...The problem with studies like this is that it’s a chicken-and-egg situation. Is it that social media causes kids to feel more depressed? Or is it that really depressed kids are more likely to use social media?...correlational studies kind of suck... So why do people do them?...because they’re easy.
It’s very easy to round up a few hundred kids, ask them how much they use social media, then ask them if they feel anxious or depressed, and create a spreadsheet. It’s much, much harder to round up thousands of kids, track them over the course of a decade and calculate how any shifts or changes in their social media usage actually affect their mental health over the years...Well, researchers with a lot of time and money have run those longitudinal studies, and the results are in...Manson lists a number of studies that are...leaning towards the conclusion that it’s anxiety and depression that drives us to use social media in all the horrible ways we use it—not the other way around.
Then, there’s the studies you never hear about. Like the one from 2012 that found posting status updates on Facebook reduces feelings of loneliness. Or the one from earlier this year that found activity on Twitter can potentially increase happiness. Or one that found that active social media use actually decreases symptoms of depression and anxiety...Those of us who were around in 2004 can remember why social media was such a big deal in the first place—it connected you to everybody in your life in a way that was simply impossible in the before-times. And those initial benefits of social media are so immediate and obvious that we’ve likely become inured to them and take them for granted.
Criticism Two: Social Media Causes Political Extremism or Radicalization
...three facts make it unlikely that social media is the culprit: 
-Studies show that political polarization has increased most among the older generations who use social media the least. Younger generations who are more active on social media tend to have more moderate views. 
-Polarization has been widening in the United States and many other countries since the 1970s, long before the advent of the internet. 
-Polarization has not occurred universally around the world. In fact, some countries are experiencing less polarization than in previous decades.
Criticism Three: Big Tech Companies Are Profiting Off the Mayhem
...Social media is not destroying society, and even if it was, Big Tech is not fanning the flames. They’re actually spending a lot of money trying to put it out...These companies have spent billions in efforts to fight back against disinformation and conspiracy theories. A recent study to see if Google’s algorithm promoted extremist alt-right content actually found the opposite: the YouTube algorithm seemed to go out of its way to promote mainstream, established news sites far more often than its fringe looney figures...Similarly, last year Facebook banned tens of thousands of conspiracy theorist and terrorist groups. This has been part of their ongoing campaign to clean up their platform. They’ve hired over 10,000 new employees in the past two years just to review content on the site for disinformation and violence.
3 - But Clearly Something’s Not Right… So What Is It?
Back in the 90s, conspiracy theories like the Y2K computer armageddon were just as common as they are now. The difference was that they were far less harmful because the social networks that existed at the time cut them off aggressively at the source...But today someone goes online, finds a web forum, or a Facebook group or a Clubhouse room, and all the little Y2Kers get together and spend all of their time socializing and validating each other based on the shared assumption that the world is about to end...Facebook didn’t create the crazy Y2Kers. It merely gives them an opportunity to find each other and connect—because, for better or worse, Facebook gives everybody the opportunity to find each other and connect...This asymmetry in beliefs is important, as the more extreme and negative the belief, the more motivated the person is to share it with others. And when you build massive platforms based on sharing… well, things get ugly.
4 - The 90/9/1 Rule
The Pareto Principle or the 80/20 Rule states that 80% of results come from 20% of the processes. I.e., 80% of a company’s revenue will often come from 20% of its customers; 80% of your social life is probably spent with 20% of your friends; 80% of traffic accidents are caused by 20% of the drivers; 80% of the crime is committed by 20% of the people. Etc.
People who have studied social networks and online communities have found a similar rule to describe information shared on the internet, dubbed the “90/9/1 Rule.” It states that in any social network or online community, 1% of the users generate 90% of the content, 9% of the users create 10% of the content, and the other 90% of people are mostly silent observers...Let’s call the 1% who create 90% of the content creators. We’ll call the 9% the engagers—as most of their content is a reaction to what the 1% is creating—and the 90% who are merely observers, we’ll refer to as lurkers.
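Manson doesn't give any code, but the arithmetic behind the 80/20 and 90/9/1 rules is easy to sketch. Here is a toy Python simulation of my own (the user count and the Pareto exponent are illustrative assumptions, not Manson's numbers): draw each user's posting volume from a heavy-tailed power-law distribution and measure how much of the total content the most active users produce.

```python
import random

def content_shares(n_users=10_000, alpha=1.16, seed=0):
    """Toy sketch: draw each user's posting volume from a Pareto
    (power-law) distribution, then measure what share of all posts
    the most active 1% and 10% of users produce."""
    rng = random.Random(seed)
    volumes = sorted((rng.paretovariate(alpha) for _ in range(n_users)),
                     reverse=True)
    total = sum(volumes)
    top1_share = sum(volumes[: n_users // 100]) / total
    top10_share = sum(volumes[: n_users // 10]) / total
    return top1_share, top10_share
```

With alpha near 1.16 (the exponent that yields the classic 80/20 split), the top 1% of simulated users typically account for a large fraction of all posts, which is exactly the flavor of asymmetry the 90/9/1 rule describes: nobody engineered the imbalance, it falls out of the distribution of activity.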
The...dynamic of social networks comes to reflect Bertrand Russell’s old lament: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.”...The creators are largely the fools and fanatics who are so certain of themselves...It’s not necessarily the platform’s algorithms that favor these fanatics—it’s that human psychology favors these fools and fanatics and the algorithms simply reflect our psychology back to us...Issues that are important to small but loud minorities dictate the discussion of the majority...Because radical and unconventional views exert a disproportionate influence online, they are mistakenly seen as common and conventional...
People develop extreme and irrational levels of pessimism. Because creators online tend to be the doomsayers and extremists, the overall perception of the state of the world skews increasingly negative. Polling data shows optimism in much of the developed world to be at all-time lows despite the fact that by almost every statistical measurement—wealth, longevity, peace, education, equality, technology, etc.—we live in the best time in human history, and it’s not even close...Much of this can be summed up in the simple phrase: social media does not accurately reflect the underlying society.
5 - Optimizing for Controversy vs Consensus
A few generations ago, there were only a few television channels, a few radio stations, and a few international news services...if you were in charge of one of these few channels of information, it was in your interest to produce content that appealed to as many people as possible...what we saw was a traditional media in the 20th century that largely sought to produce content focused on consensus.
But with the internet, the supply of information exploded. Suddenly everyone had 500 TV channels and dozens of radio stations and an infinite number of websites to choose from...Therefore, the most profitable strategy in the media and entertainment stopped being consensus and instead became controversy...This optimization for controversy trickled down, all the way from politicians and major news outlets, to individual influencers on social media...The result is this fun-house-mirror version of reality, where you go online (or turn on cable news) and feel like the world is constantly falling apart around you but it’s not.
And the fun-house-mirror version of reality isn’t caused by social media, it’s caused by the profit incentives on media/entertainment in an environment where there’s way more supply of content than there is demand. Where there’s far more supply of news and information than there is time to consume it. Where there’s a natural human proclivity to pay more attention to a single car crash than the hundreds of people successfully driving down the highway, going on their merry way.
6 - The Silent Majority
Social media has not changed our culture. It’s shifted our awareness of culture to the extremes of all spectrums. And until we all recognize this, it will be impossible to have serious conversations about what to do or how to move forward.
You can affect the culture simply by shifting people’s awareness about certain subjects. The hysteria in the media over their irrelevance has pushed our culture to a place where we overestimate social media and underestimate our own psychology.
Instead, we must push our perception back to a more realistic and mature understanding of social media and social networks. To do this, it’s important each of us individually understands concepts such as The Attention Diet and the Attention Economy, that we learn to cut out most news consumption, and, you know, maybe spend some more time outside.
Moral panics find a scapegoat to blame for what we hate to admit about ourselves. My parents and their friends didn’t ask why kids were drawn to such aggressive and vulgar music. They were afraid of what it might have revealed about themselves. Instead, they simply blamed the musicians and the video games.
Similarly, rather than owning up to the fact that these online movements are part of who we are—that these are the ugly underbellies of our society that have existed and persisted for generations—we instead blame the social media platforms for accurately reflecting ourselves back to us.
The great biographer Robert Caro once said, “Power doesn’t always corrupt, but power always reveals.” Perhaps the same is true of the most powerful networks in human history.
Social media has not corrupted us, it’s merely revealed who we always were.

Thursday, June 10, 2021

Storytelling increases oxytocin and positive emotions, decreases cortisol and pain, in hospitalized kids

From Brockington et al.:
Storytelling is a distinctive human characteristic that may have played a fundamental role in humans’ ability to bond and navigate challenging social settings throughout our evolution. However, the potential impact of storytelling on regulating physiological and psychological functions has received little attention. We investigated whether listening to narratives from a storyteller can provide beneficial effects for children admitted to intensive care units. Biomarkers (oxytocin and cortisol), pain scores, and psycholinguistic associations were collected immediately before and after storytelling and an active control intervention (solving riddles that also involved social interaction but lacked the immersive narrative aspect). Compared with the control group, children in the storytelling group showed a marked increase in oxytocin combined with a decrease in cortisol in saliva after the 30-min intervention. They also reported less pain and used more positive lexical markers when describing their time in hospital. Our findings provide a psychophysiological basis for the short-term benefits of storytelling and suggest that a simple and inexpensive intervention may help alleviate the physical and psychological pain of hospitalized children on the day of the intervention.

Wednesday, June 09, 2021

Cultural Evolution of Genetic Heritability

Behavioral and Brain Sciences has accepted an article from Uchiyama et al., whose abstract I copy below, and invites the submission of commentary proposals.
Behavioral genetics and cultural evolution have both revolutionized our understanding of human behavior—largely independent of each other. Here we reconcile these two fields under a dual inheritance framework, offering a more nuanced understanding of the interaction between genes and culture. Going beyond typical analyses of gene-environment interactions, we describe the cultural dynamics that shape these interactions by shaping the environment and population structure. A cultural evolutionary approach can explain, for example, how factors such as rates of innovation and diffusion, density of cultural sub-groups, and tolerance for behavioral diversity impact heritability estimates, thus yielding predictions for different social contexts. Moreover, when cumulative culture functionally overlaps with genes, genetic effects become masked, unmasked, or even reversed, and the causal effects of an identified gene become confounded with features of the cultural environment. The manner of confounding is specific to a particular society at a particular time, but a WEIRD (Western, educated, industrialized, rich, democratic) sampling problem obscures this boundedness. Cultural evolutionary dynamics are typically missing from models of gene-to-phenotype causality, hindering generalizability of genetic effects across societies and across time. We lay out a reconciled framework and use it to predict the ways in which heritability should differ between societies, between socioeconomic levels and other groupings within some societies but not others, and over the life course. An integrated cultural evolutionary behavioral genetic approach cuts through the nature-nurture debate and helps resolve controversies in topics such as IQ.

Tuesday, June 08, 2021

We’ve been great at extending our lives, but not at ending them.

Steven Pinker does a review of Steven Johnson's recent book "Extra Life: A Short History of Living Longer," whose subject matter overlaps considerably with Chapters 5 ("Life") and 6 ("Health") of Pinker's book "Enlightenment Now" that MindBlog abstracted in a series of posts March 1-12, 2018.
Starting in the second half of the 19th century, the average life span began to climb rapidly, giving humans not just extra life, but an extra life. In rich countries, life expectancy at birth hit 40 by 1880, 50 by 1900, 60 by 1930, 70 by 1960, and 80 by 2010. The rest of the world is catching up. Global life expectancy in 2019 was 72.6 years, higher than that of any country, rich or poor, in 1950...Of the eight innovations that have saved the most lives, as Johnson sees it, six are defenses against infectious disease.
The sin of ingratitude is said to condemn one to the ninth circle of hell, and that’s where we may be headed for our attitudes toward the granters of extra life. In the list of inventions that saved lives by the hundreds of millions, we find antibiotics (squandered to fatten chickens in factory farms), blood transfusions (condemned as sinful by the devout), and chlorination and pasteurization (often mistrusted as unnatural). Among those that saved lives by the billions, we find the lowly toilet and sewer (metaphors for the contemptible), artificial fertilizer (a devil for Whole Foods shoppers) and vaccines (perhaps the greatest invention in history, and the target of head-smackingly stupid resistance).
Johnson shakes us out of our damnable ingratitude and explains features of modernity that are reviled by sectors of the right and left: government regulation, processed food, high-tech farming, big data and bureaucracies like the Food and Drug Administration, the Centers for Disease Control and Prevention and the World Health Organization. He is open about their shortcomings and dangers. But much depends on whether we see them as evils that must be abolished or as lifesavers with flaws that must be mitigated.
A New Yorker essay by Brooke Jarvis also reviews Johnson's book and recaps the story of extending our lives, but moves on to consider another goal that we have been much less clear on - attaining a good death - by reviewing Katie Engelhart’s “The Inevitable: Dispatches on the Right to Die," which describes the right-to-die underground, a world of people who ask why a medical system so good at extending their lives will do little to help them end those lives in a peaceful and painless way. She gives the stories of people who have managed their exit through using the manual "The Peaceful Pill Handbook."
In the United States, physician-assisted suicide is permitted in a slowly growing number of states, but only to ease the deaths of patients who fit a narrow set of legal criteria. Generally, they must have received a terminal diagnosis with a prognosis of six months or less; be physically able to administer the drugs to themselves; have been approved by doctors as mentally competent to make the decision; and have made a formal request more than once, including after a waiting period.
Doctors who specialize in aid in dying often distinguish between “despair suicides,” the most familiar version, and “rational suicides,” those sought by people who have, in theory, weighed a terminal or painful or debilitating diagnosis and made a measured, almost mathematical choice about how best to deal with it. In practice, though, Engelhart finds that it’s hard to isolate pure rationality; many emotional factors always seem to tilt the scales. People worry about their lives having a sense of narrative integrity and completion. They worry about autonomy, and about “dignity” (this is another word that comes up a lot, and when Engelhart digs in she finds that many people define it quite specifically: control over one’s own defecation and mess). They worry about what other people will think of them. They worry about who will take care of them when they can no longer take care of themselves.
Behind every fraught ethical debate about physician-assisted suicide stands this inescapable reality: there are many people for whom the way we do things is not working. The right to die can’t be extricated from a right to care. One of the doctors Engelhart interviews—an oncologist in Belgium, where euthanasia laws are widely supported, and aid in dying is legal even for psychiatric patients who request it and qualify—tells her that America is not ready for such laws. “It’s a developing country,” he says. “You shouldn’t try to implement a law of euthanasia in countries where there is no basic healthcare.”

Monday, June 07, 2021

Making the hard problem of consciousness easier

Yet another morsel of text for consciousness mavens. Melloni et al. (open source) describe efforts to determine which of several current theories best explains conscious experience by using the approach of adversarial collaboration...
...adversarial collaboration rests on identifying the most diagnostic points of divergence between competing theories, reaching agreement on precisely what they predict, and then designing experiments that directly test those diverging predictions. During the past 2 years, several groups have adopted this approach...The global neuronal workspace theory (GNWT)  claims that consciousness is instantiated by the global broadcasting and amplification of information across an interconnected network of prefrontal-parietal areas and many high-level sensory cortical areas...Conversely, the integrated information theory (IIT)  holds that consciousness should be understood in terms of cause-effect “power” that reflects the amount of maximally irreducible integrated information generated by certain neuronal architectures. On the basis of mathematical and neuroanatomical considerations, the IIT holds that the posterior cortex is ideally situated for generating a maximum of integrated information ...experiments designed by neuroscientists and philosophers not directly associated with the theories are being conducted in six independent laboratories.
I pass on their summary graphic:

Friday, June 04, 2021

Inequality is a law of nature

DeDeo and Hobson do a commentary on a model developed by Kawakatsu et al. (open source) that explains the emergence of hierarchy in networked endorsement dynamics. I pass on a few clips from both, and after that list titles with links to a number of previous MindBlog posts that have presented explanations of why inequality and hierarchy are features of all natural systems. First, from DeDeo and Hobson:
As an old Scottish proverb says, “give a Dog an ill Name, and he’ll soon be hanged.” Even when the signal has little to do with underlying reality, endorsement—or contempt—can produce lasting consequences for a person’s social position. The ease with which such pieces of folk wisdom translate across both time and species suggests that there is a general, and even perhaps universal, logic to hierarchies and how they form. Kawakatsu et al. make an important advance in the quest for this kind of understanding, providing a general model for how subtle differences in individual-level decision-making can lead to hard-to-miss consequences for society as a whole...Their work reveals two distinct regimes—one egalitarian, one hierarchical—that emerge from shifts in individual-level judgment. These lead to statistical methods that researchers can use to reverse engineer observed hierarchies, and understand how signaling systems work when prestige and power are in play. The results make a singular contribution at the intersection of two distinct traditions of research into social power: the mechanistic (how hierarchies get made) and the functional (the adaptive roles they can play in society).
Kawakatsu et al.'s abstract:
Many social and biological systems are characterized by enduring hierarchies, including those organized around prestige in academia, dominance in animal groups, and desirability in online dating. Despite their ubiquity, the general mechanisms that explain the creation and endurance of such hierarchies are not well understood. We introduce a generative model for the dynamics of hierarchies using time-varying networks, in which new links are formed based on the preferences of nodes in the current network and old links are forgotten over time. The model produces a range of hierarchical structures, ranging from egalitarianism to bistable hierarchies, and we derive critical points that separate these regimes in the limit of long system memory. Importantly, our model supports statistical inference, allowing for a principled comparison of generative mechanisms using data. We apply the model to study hierarchical structures in empirical data on hiring patterns among mathematicians, dominance relations among parakeets, and friendships among members of a fraternity, observing several persistent patterns as well as interpretable differences in the generative mechanisms favored by each. Our work contributes to the growing literature on statistically grounded models of time-varying networks.
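The generative mechanism in the abstract, new links formed according to current preferences while old links are forgotten, can be sketched in a few lines of Python. This is my own toy illustration under assumed parameters (the node count, preference strength `beta`, and decay `memory` are all hypothetical), not the authors' actual model, which additionally supports statistical inference:

```python
import random

def endorsement_scores(n=30, steps=5000, beta=2.0, memory=0.99, seed=1):
    """Minimal sketch of endorsement dynamics on a time-varying network:
    at each step a random node endorses another, preferring nodes that
    already have high scores (preference strength beta), while all old
    endorsements decay (memory < 1 models forgetting). Returns the final
    endorsement score of each node."""
    rng = random.Random(seed)
    score = [1.0] * n                        # start out egalitarian
    for _ in range(steps):
        endorser = rng.randrange(n)
        # pick a target with probability proportional to score**beta
        weights = [score[j] ** beta if j != endorser else 0.0
                   for j in range(n)]
        r = rng.random() * sum(weights)
        target, acc = n - 1, 0.0
        for j, w in enumerate(weights):
            acc += w
            if acc >= r:
                target = j
                break
        score = [s * memory for s in score]  # forget old links
        score[target] += 1.0                 # record the new endorsement
    return score
```

Raising `beta` pushes the system from an egalitarian regime, where scores stay roughly even, toward a hierarchical one, where a few nodes absorb most endorsements, mirroring the two regimes the paper derives from individual-level judgment.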
And, I list a few relevant past MindBlog posts:
Wealth inequality as a law of nature.
The science of inequality.
The Pareto Principle - unfairness is a law.
Simple mechanisms can generate wealth inequality.
A choice mind-set perpetuates acceptance of wealth inequality.

Thursday, June 03, 2021

Optogenetics used to induce pair bonding and restore vision.

I want to note two striking technical advances that make use of the light-activated protein rhodopsin that I spent 36 years of my laboratory life studying. Using genetic techniques, a version of this protein found in algae, called channelrhodopsin, can be inserted into nerve cells so that they become activated by light. Hughes does a lucid explanation of a technical tour de force in bioengineering reported by Yang et al. They used transgenic mice in which light-sensitive dopaminergic (DA) neurons in the ventral tegmental area (VTA) brain region (involved in processing reward and promoting social behavior) can be activated by blue light pulses from a tiny LED device implanted under the skull. It is known that some VTA areas fire in synchrony when two mice (or humans) are cooperating or bonding. When two male mice were dropped into a cage, they exhibited mild animus towards each other, but when both were zapped with blue light at the same high frequency they clung to and started grooming each other! (Aside from being forbidden and impractical in humans, how about this as a means of getting someone to like you!...all you would have to do is control the devices driving VTA DA neuron activity in yourself and your intended.)

A second striking use of optogenetics is reported in Zimmer's summary of the work of Sahel et al., who have partially restored sight in one eye of a blind man with retinitis pigmentosa, a hereditary disease that destroys the light sensitive photoreceptor cells of the retina but spares the ganglion cell layer whose axons normally send visual information to the brain. Here is the Sahel et al. abstract:

Optogenetics may enable mutation-independent, circuit-specific restoration of neuronal function in neurological diseases. Retinitis pigmentosa is a neurodegenerative eye disease where loss of photoreceptors can lead to complete blindness. In a blind patient, we combined intraocular injection of an adeno-associated viral vector encoding ChrimsonR with light stimulation via engineered goggles. The goggles detect local changes in light intensity and project corresponding light pulses onto the retina in real time to activate optogenetically transduced retinal ganglion cells. The patient perceived, located, counted and touched different objects using the vector-treated eye alone while wearing the goggles. During visual perception, multichannel electroencephalographic recordings revealed object-related activity above the visual cortex. The patient could not visually detect any objects before injection with or without the goggles or after injection without the goggles. This is the first reported case of partial functional recovery in a neurodegenerative disease after optogenetic therapy.

Wednesday, June 02, 2021

I’m Not Scared to Reenter Society. I’m Just Not Sure I Want To.

I have to pass on a few clips from Tim Kreider's piece in the Atlantic:
...after a year in isolation, I, at least, have gotten acclimated to a different existence—quieter, calmer, and almost entirely devoid of bullshit. If you’d told me in March 2020 that quarantine would last more than a year, I would have been appalled; I can’t imagine how I would’ve reacted if you’d told me, once it ended, I would miss it.
Quarantine has given us all time and solitude to think—a risk for any individual, and a threat to any status quo. People have gotten to have the experience—some of them for the first time in their life—of being left alone...Relieved of the... world’s battering demands and expectations, people’s personalities have started to assume their true shape. And a lot of them don’t want to return to wasting their days in purgatorial commutes, to the fluorescent lights and dress codes and middle-school politics of the office. Service personnel are apparently ungrateful for the opportunity to get paid not enough to live on by employers who have demonstrated they don’t care whether their workers live or die. More and more people have noticed that some of the basic American axioms—that hard work is a virtue, productivity is an end in itself—are horseshit. I’m remembering those science-fiction stories in which someone accidentally sees behind the façade of their blissful false reality to the grim dystopia they actually inhabit.
Maybe this period of seeming dormancy, of hibernation, has actually been a phase of metamorphosis. Though, before caterpillars become butterflies, they first digest themselves, dissolving into an undifferentiated mush called “the pupal soup.” People are at different stages of this transformation—some still unformed, some already opulently emergent. Some of us may wither on exposure to the air. Escape from the chrysalis is always a struggle. Me, I am still deep in the mush phase, still watching TV on the couch, trying to finish just this one essay, awaiting, with vague faith in the forces that shape us, whatever imago is assembling within.

Tuesday, June 01, 2021

Watching brain regions that help us anticipate what's going to happen next.

A primary function of the brain is to adaptively use past experience to generate expectations about events that are likely to occur in the future. Lee et al. used a machine learning model to analyze fMRI measurements made on 30 individuals during repeated viewings of a movie, and a nice summary of this work is presented by the PNAS Journal Club. Areas in the frontal cortex anticipate up to 15 seconds in advance (possibly foreseeing movie plot changes), keeping track of tens of seconds, while cortical areas at the back of the brain anticipate only about 1 second ahead. 

Vertical slices of the brain, imaged at different locations, reveal a timescale gradient for anticipation. Timescales are short at the back of the brain (cool blues) and longer at the front (warm reds).

These results demonstrate a hierarchy of anticipatory signals in the human brain and link them to subjective experiences of events...This hierarchical view of the brain is very different from the traditional, modular view...In the traditional view, there are systems that process raw sensory inputs, such as sights or sounds, and there are separate systems that call up memories or make plans. The latest findings blur the lines between those systems, showing that regions known for processing simple pieces of visual information, such as the visual cortex, can also anticipate what’s coming up soon, even if just by a few seconds.