Wednesday, September 18, 2019

The power of "cute"

I want to point MindBlog readers to an article by Simon May at aeon.co that encapsulates the contents of his new book "The Power of Cute" (2019), and to a Trends in Cognitive Sciences review article by Kringelbach et al. on cuteness that summarizes work on the brain activity underlying survival-related cuteness responses. The latter article's introduction notes that the prevailing view of cuteness...
...came from the founding fathers of ethology, Nobel prizewinners Konrad Lorenz and Niko Tinbergen. They proposed that the cute facial features of infants form a ‘Kindchenschema’ (infant schema), a prime example of an ‘innate releasing mechanism’ that unlocks instinctual behaviours...These characteristics contribute to ‘cuteness’ and propel our caregiving behaviours, which is vital because infants need our constant attention to survive and thrive. Infants attract us through all our senses, which helps make cuteness one of the most basic and powerful forces shaping our behaviour.
May considers the increasing popularity of child-like figures in popular culture and asks:
In such uncertain and uneasy times, and with so much injustice, hate and intolerance threatening the world, don’t we have more serious things to focus on than the escapades of that feline girl-figure Hello Kitty? Or Pokémon, the video-game franchise that’s hot again in 2019...The craze for all things cute is motivated, most obviously, by the urge to escape from precisely such a threatening world into a garden of innocence in which childlike qualities arouse deliciously protective feelings, and bestow contentment and solace. Cute cues include behaviours that appear helpless, harmless, charming and yielding, and anatomical features such as outsize heads, protruding foreheads, saucer-like eyes, retreating chins and clumsy gaits.
May suggests that the increasing popularity of cuteness derives not only from the 'sweet' end of the whole spectrum of cuteness but also from a move toward its 'uncanny' and ambiguous end, a....
...faintly menacing subversion of boundaries – between the fragile and the resilient, the reassuring and the unsettling, the innocent and the knowing – when presented in cute’s frivolous, teasing idiom, is central to its immense popularity... ‘unpindownability’, as we might call it, that pervades cute – the erosion of borders between what used to be seen as distinct or discontinuous realms, such as childhood and adulthood – is also reflected in the blurred gender of many cute objects such as Balloon Dog or a lot of Pokémon. It is reflected, too, in their frequent blending of human and nonhuman forms, as in the cat-girl Hello Kitty. And in their often undefinable age...In such ways, cute is attuned to an era that is no longer so wedded to such hallowed dichotomies as masculine and feminine, sexual and nonsexual, adult and child, being and becoming, transient and eternal, body and soul, absolute and contingent, and even good and bad.
Although attraction to such cute objects as the mouthless, fingerless Hello Kitty can express a desire for power, cuteness can also parody and subvert power by playing with the viewer’s sense of her own power, now painting her into a dominant pose, now sowing uncertainty about who is really in charge...

Monday, September 16, 2019

Psychological adaptation to the apocalypse - meditate, or just be happy?

In this post, not exactly an upper, I point first to two in-your-face articles on why we ought to be afraid, very afraid, about humanity's future technological and ecological environment, and then note two pieces of writing on psychological adaptations that might damp down the full turn-on of our brains' fear machinery.

Novelist Jonathan Franzen writes a screed that is very effective at scaring the bejesus out of us. His basic argument: “The climate apocalypse is coming. To prepare for it, we need to admit that we can’t prevent it.” A chorus of criticism has greeted Franzen's article: "Franzen is wrong on the science, on the politics, and on the psychology of human behavior as it pertains to climate change." (See also Chrobak.)

And, for alarm about our looming digital environment, the 6,000-word essay by Glenn S. Gerstell, general counsel of the National Security Agency, summarized by Warzel, should do the job. The first nation to crack quantum computing (China or the US) will rule the world!

So, how do we manage to wake up cheerful in the morning? Futurist Yuval Harari offers his approach in Chapter 21 of his book "21 Lessons for the 21st Century," describing his experience of learning to meditate, starting with the initial instructions (to observe your process of breathing) in his first Vipassana meditation course. He now meditates two hours every day.
The point is that meditation is a tool for observing the mind directly...For at least two hours a day I actually observe reality as it is, while for the other twenty-two hours I get overwhelmed by emails and tweets and cute-puppy videos. Without the focus and clarity provided by this practice, I could not have written Sapiens or Homo Deus.
A glimmer of hopefulness can also be obtained by reading books in the vein of Pinker's "Enlightenment Now", which documents again and again, for many areas, how dire predictions about the future have not come to pass. The injunction here would be to be optimistic, not a bad idea, given the recent PNAS article by Lee et al. documenting that the lifespan of optimistic people, on average, is 11 to 15% longer.

Friday, September 13, 2019

Twitter is making us dumber.

Stanley-Becker points to research providing the hardly surprising evidence that communicating about complex issues in 280-character chunks of text dumbs down the understanding of Twitter users. Using Twitter to teach literature had an overall negative effect on students’ average achievement, with the effect strongest among students who usually perform better. Numerous schools have begun to use Twitter discussion among students on the assumption that it would enhance intellectual attainment, but in fact it undermines it.

Wednesday, September 11, 2019

Can we reverse our biological age? The usual media hysteria...

I must have seen at least 10 of my media inputs hyping a small study by Fahy et al. (9 white men, and lacking controls), pointed to by Abbott, suggesting that the body's epigenetic clock might be reversed. The study actually had the goal of seeing whether human growth hormone could stimulate regeneration of the thymus gland and enhance immune function. Because the hormone can promote diabetes, the trial included two widely used anti-diabetic drugs, dehydroepiandrosterone (DHEA) and metformin, in the treatment cocktail. (Metformin is being evaluated as an anti-aging drug in several large-scale studies.)
Checking the effect of the drugs on the participants’ epigenetic clocks was an afterthought. The clinical study had finished when Fahy approached Horvath to conduct an analysis. (Epigenetic clocks are constructed by selecting sets of DNA-methylation sites across the genome. In the past few years, Horvath — a pioneer in epigenetic-clock research — has developed some of the most accurate ones)...Horvath used four different epigenetic clocks to assess each patient’s biological age, and he found significant reversal for each trial participant in all of the tests. “This told me that the biological effect of the treatment was robust,” he says. What’s more, the effect persisted in the six participants who provided a final blood sample six months after stopping the trial, he says.
The understandable excitement over this result is probably out of proportion to the probability that it will be confirmed in larger experiments with proper controls.
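For readers wondering what an epigenetic clock actually computes, here is a minimal, purely illustrative Python sketch of the general form such clocks take: a weighted sum of DNA-methylation "beta values" (fraction methylated, 0 to 1) at a selected panel of CpG sites, plus an intercept. The sites, weights, and numbers below are invented for illustration and are not Horvath's actual model, which uses hundreds of sites with weights fit by penalized regression on large reference datasets.

```python
import numpy as np

# Hypothetical coefficients for a toy 5-CpG clock (illustration only).
intercept = 35.0
weights = np.array([22.0, -15.0, 8.0, -30.0, 12.0])

def predict_epigenetic_age(beta_values: np.ndarray) -> float:
    """Return a predicted biological age from methylation beta values (0-1)."""
    return float(intercept + weights @ beta_values)

# Example: the same (hypothetical) person sampled before and after a treatment.
before = np.array([0.62, 0.41, 0.55, 0.33, 0.70])
after  = np.array([0.60, 0.44, 0.53, 0.36, 0.68])

print(predict_epigenetic_age(before))  # ~45.4 "epigenetic years"
print(predict_epigenetic_age(after))   # ~43.2 -- a lower value is what gets reported as "age reversal"
```

The Fahy et al. result amounts to observing that such predicted ages dropped after treatment, which is why the choice of clock and the size of the sample matter so much for interpretation.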

Monday, September 09, 2019

Training to reduce cognitive biases.

Sellier et al. show that students assigned to solve a business case exercise are less likely to choose an inferior confirmatory solution when they have previously undergone a debiasing-training intervention:
The primary objection to debiasing-training interventions is a lack of evidence that they improve decision making in field settings, where reminders of bias are absent. We gave graduate students in three professional programs (N = 290) a one-shot training intervention that reduces confirmation bias in laboratory experiments. Natural variance in the training schedule assigned participants to receive training before or after solving an unannounced business case modeled on the decision to launch the Space Shuttle Challenger. We used case solutions to surreptitiously measure participants’ susceptibility to confirmation bias. Trained participants were 29% less likely to choose the inferior hypothesis-confirming solution than untrained participants. Analysis of case write-ups suggests that a reduction in confirmatory hypothesis testing accounts for their improved decision making in the case. The results provide promising evidence that debiasing-training effects transfer to field settings and can improve decision making in professional and private life.

Friday, September 06, 2019

How personal and professional conduct relate to one another.

From Griffin et al.:

Significance
The relative importance of personal traits compared with context for predicting behavior is a long-standing issue in psychology. This debate plays out in a practical way every time an employer, voter, or other decision maker has to infer expected professional conduct based on observed personal behavior. Despite its theoretical and practical importance, there is little academic consensus on this question. We fill this void with evidence connecting personal infidelity to professional behavior in 4 different settings.
Abstract
We study the connection between personal and professional behavior by introducing usage of a marital infidelity website as a measure of personal conduct. Police officers and financial advisors who use the infidelity website are significantly more likely to engage in professional misconduct. Results are similar for US Securities and Exchange Commission (SEC) defendants accused of white-collar crimes, and companies with chief executive officers (CEOs) or chief financial officers (CFOs) who use the website are more than twice as likely to engage in corporate misconduct. The relation is not explained by a wide range of regional, firm, executive, and cultural variables. These findings suggest that personal and workplace behavior are closely related.

Wednesday, September 04, 2019

Training wisdom - the Illeist (third person) method.

I think my sanest moments are those when I experience myself as watching, in third-person mode, rather than “being” Deric, the immersed actor. Science journalist David Robson writes an essay on this perspective in Aeon, “Why speaking to yourself in the third person makes you wiser,” noting that this ancient rhetorical method, used by Julius Caesar and termed ‘illeism’ in 1809 by the poet Coleridge (Latin ille, meaning ‘he, that’), can clear the emotional fog of simple rumination, shifting perspective to see past biases. Robson notes the work of Igor Grossmann at the University of Waterloo in Canada, whose aim is:
...to build a strong experimental footing for the study of wisdom, which had long been considered too nebulous for scientific enquiry. In one of his earlier experiments, he established that it’s possible to measure wise reasoning and that, as with IQ, people’s scores matter. He did this by asking participants to discuss out-loud a personal or political dilemma, which he then scored on various elements of thinking long-considered crucial to wisdom, including: intellectual humility; taking the perspective of others; recognising uncertainty; and having the capacity to search for a compromise. Grossmann found that these wise-reasoning scores were far better than intelligence tests at predicting emotional wellbeing, and relationship satisfaction – supporting the idea that wisdom, as defined by these qualities, constitutes a unique construct that determines how we navigate life challenges.
The abstract from Grossmann et al.:
We tested the utility of illeism – a practice of referring to oneself in the third person – for the trainability of wisdom-related characteristics in everyday life: i) wise reasoning (intellectual humility, open-mindedness in ways a situation may unfold, perspective-taking, attempts to integrate different viewpoints) and ii) accuracy in emotional forecasts toward close others. In a month-long field experiment, people adopted either the third-person training or first-person control perspective when describing their most significant daily experiences. Assessment of spontaneous wise reasoning before and after the intervention revealed substantial growth in the training (vs. control) condition. At the end of the intervention, people forecasted their feelings toward a close other in challenging situations. A month later, these forecasted feelings were compared against their experienced feelings. Participants in the training (vs. control) condition showed greater alignment of forecasts and experiences, largely due to changes in their emotional experiences. The present research demonstrates a path to evidence-based training of wisdom-related processes via the practice of illeism.
Robson finds this work particularly fascinating,
...considering the fact that illeism is often considered to be infantile. Just think of Elmo in the children’s TV show Sesame Street, or the intensely irritating Jimmy in the sitcom Seinfeld – hardly models of sophisticated thinking. Alternatively, it can be taken to be the sign of a narcissistic personality – the very opposite of personal wisdom. After all, Coleridge believed that it was a ruse to cover up one’s own egotism: just think of the US president’s critics who point out that Donald Trump often refers to himself in the third person. Clearly, politicians might use illeism for purely rhetorical purposes but, when applied to genuine reflection, it appears to be a powerful tool for wiser reasoning.
For an example of third-person usage reflecting not wisdom but a narcissistic personality, look no further than our current president, Donald Trump, as noted in this Washington Post piece by Rieger.

Monday, September 02, 2019

Infants expect leaders to right wrongs

From Stavans and Baillargeon:
Anthropological and psychological research on direct third-party punishment suggests that adults expect the leaders of social groups to intervene in within-group transgressions. Here, we explored the developmental roots of this expectation. In violation-of-expectation experiments, we asked whether 17-mo-old infants (n = 120) would expect a leader to intervene when observing a within-group fairness transgression but would hold no particular expectation for intervention when a nonleader observed the same transgression. Infants watched a group of 3 bear puppets who served as the protagonist, wrongdoer, and victim. The protagonist brought in 2 toys for the other bears to share, but the wrongdoer seized both toys, leaving none for the victim. The protagonist then either took 1 toy away from the wrongdoer and gave it to the victim (intervention event) or approached each bear in turn without redistributing a toy (nonintervention event). Across conditions, the protagonist was either a leader (leader condition) or a nonleader equal in rank to the other bears (nonleader condition); across experiments, leadership was marked by either behavioral or physical cues. In both experiments, infants in the leader condition looked significantly longer if shown the nonintervention as opposed to the intervention event, suggesting that they expected the leader to intervene and rectify the wrongdoer’s transgression. In contrast, infants in the nonleader condition looked equally at the events, suggesting that they held no particular expectation for intervention from the nonleader. By the second year of life, infants thus already ascribe unique responsibilities to leaders, including that of righting wrongs.