Wednesday, February 27, 2019

New form of neural communication at a distance.

Practitioners of the branch of parapsychology that claims subtle sensing and psychological action at a distance might be heartened to find a shred of supportive data in recent bizarre findings suggesting a new form of neural communication. It turns out that two pieces of brain tissue in close proximity can still communicate with each other even after they have been completely severed, leaving a gap between them. From Dickson's summary of work by Chiang et al.:
Chiang et al. show that slow periodic activity in a horizontal hippocampal slice preparation occurs through dendritic NMDA receptor‐dependent Ca2+ spiking, which is itself self‐generating and self‐propagating, via ephaptic interactions across neurons. Consistent with purely ephaptic transmission, this activity and its active propagation across the slice were resistant to pharmacological blockers of fast ionotropic chemical neurotransmission, as well as pharmacological blockade of electrical transmission via gap junctions. What is particularly compelling is that the activity could be not only modulated, but also eliminated or even regenerated by imposed electrical fields. Most shockingly, this activity could be transmitted from one side of a surgically severed slice to the other when the two cut edges were simply placed in close proximity. These surprising findings were further supported by a computer model of hippocampal circuitry.
Here are the key points listed in the Chiang et al. (open source) article (a toy numerical sketch of the ephaptic idea follows the list):
Slow periodic activity can propagate with speeds around 0.1 m s−1 and be modulated by weak electric fields.
Slow periodic activity in the longitudinal hippocampal slice can propagate without chemical synaptic transmission or gap junctions, but can generate electric fields which in turn activate neighbouring cells.
Applying local extracellular electric fields with amplitude in the range of endogenous fields is sufficient to modulate or block the propagation of this activity both in the in silico and in the in vitro models.
Results support the hypothesis that endogenous electric fields, previously thought to be too small to trigger neural activity, play a significant role in the self‐propagation of slow periodic activity in the hippocampus.
Experiments indicate that a neural network can give rise to sustained self‐propagating waves by ephaptic coupling, suggesting a novel propagation mechanism for neural activity under normal physiological conditions.
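For readers who want to see the gist of the ephaptic idea in code, here is a deliberately cartoonish toy sketch (mine, not the Chiang et al. model): a chain of units with no synapses and no gap junctions, coupled only by a "field" that decays exponentially with distance. Activity seeded at one end recruits neighbors and keeps propagating across a small gap inserted mid-chain, as long as the gap is shorter than the field's reach. All parameters are invented for illustration and are not calibrated to the hippocampal data.

```python
# Toy sketch of purely field-mediated ("ephaptic") propagation -- not the authors' model.
import numpy as np

n_units = 200
spacing_mm = 0.01                      # 10 micron spacing -> a ~2 mm chain of units
positions = np.arange(n_units) * spacing_mm
positions[n_units // 2:] += 0.02       # a 20 micron gap mid-chain: the "surgical cut"

field_length_mm = 0.05                 # distance constant of each unit's field (assumed)
threshold = 1.0                        # summed field needed to recruit a silent unit
active_ms = 5.0                        # how long a recruited unit keeps generating field
dt_ms = 0.1

fire_time = np.full(n_units, np.inf)
fire_time[:3] = 0.0                    # seed activity at one end of the chain

for step in range(1, 20000):
    t = step * dt_ms
    contributing = (fire_time <= t) & (t < fire_time + active_ms)
    if not contributing.any():
        break                          # all activity has died out
    dist = np.abs(positions[:, None] - positions[None, :])
    field = (np.exp(-dist / field_length_mm) * contributing[None, :]).sum(axis=1)
    newly_recruited = (field > threshold) & np.isinf(fire_time)
    fire_time[newly_recruited] = t

if np.isfinite(fire_time[-1]):
    speed = (positions[-1] - positions[0]) / (fire_time[-1] - fire_time[0])  # mm/ms == m/s
    print(f"activity crossed the cut; toy propagation speed ~ {speed:.2f} m/s")
else:
    print("activity died out before reaching the far end of the chain")
```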

Tuesday, February 26, 2019

The Neuroscience of ‘Rock-a-Bye Baby’

I've always wondered why I sleep like a baby when on a boat being slowly rocked by waves, so was intrigued by Friedman's recent piece pointing to work by Perrault et al. showing that a slow rocking motion not only improves sleep but also can help people consolidate memories overnight. This is because continuous rocking stimulation strengthens deep sleep via the neural entrainment of intrinsic sleep oscillations. The Perrault et al. summary:

Highlights
• Rocking boosts deep sleep, sleep maintenance, and memory in healthy sleepers
• Fast spindles increase during rocking and synchronize with the slow oscillation up-state
• Rocking-induced overnight memory improvement relates to increased sigma activity
• Continuous rocking stimulation actively entrains intrinsic sleep oscillations
Summary
Sensory processing continues during sleep and can influence brain oscillations. We previously showed that a gentle rocking stimulation (0.25 Hz), during an afternoon nap, facilitates wake-sleep transition and boosts endogenous brain oscillations (i.e., EEG spindles and slow oscillations [SOs]). Here, we tested the hypothesis that the rhythmic rocking stimulation synchronizes sleep oscillations, a neurophysiological mechanism referred to as “neural entrainment.” We analyzed EEG brain responses related to the stimulation recorded from 18 participants while they had a full night of sleep on a rocking bed. Moreover, because sleep oscillations are considered of critical relevance for memory processes, we also investigated whether rocking influences overnight declarative memory consolidation. We first show that, compared to a stationary night, continuous rocking shortened the latency to non-REM (NREM) sleep and strengthened sleep maintenance, as indexed by increased NREM stage 3 (N3) duration and fewer arousals. These beneficial effects were paralleled by an increase in SOs and in slow and fast spindles during N3, without affecting the physiological SO-spindle phase coupling. We then confirm that, during the rocking night, overnight memory consolidation was enhanced and also correlated with the increase in fast spindles, whose co-occurrence with the SO up-state is considered to foster cortical synaptic plasticity. Finally, supporting the hypothesis that a rhythmic stimulation entrains sleep oscillations, we report a temporal clustering of spindles and SOs relative to the rocking cycle. Altogether, these findings demonstrate that a continuous rocking stimulation strengthens deep sleep via the neural entrainment of intrinsic sleep oscillations.
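One concrete way to picture the "temporal clustering of spindles and SOs relative to the rocking cycle" is a phase analysis: assign each detected event a phase of the 0.25 Hz rocking cycle and measure how tightly those phases cluster. The sketch below uses simulated event times, not the study's data; the clustering measure (mean resultant vector length) is a standard circular statistic and not necessarily the authors' exact method.

```python
# Minimal sketch with simulated data: do sleep events cluster at a preferred rocking phase?
import numpy as np

rng = np.random.default_rng(0)
rocking_freq = 0.25                       # Hz, the stimulation frequency used in the study

# Fake spindle onset times (seconds over one hour), biased toward one rocking phase.
preferred_phase = np.pi / 2
candidate_times = np.sort(rng.uniform(0, 3600, size=400))
keep = rng.random(candidate_times.size) < 0.5 + 0.4 * np.cos(
    2 * np.pi * rocking_freq * candidate_times - preferred_phase)
event_times = candidate_times[keep]

# Phase of the rocking cycle at each event, plus circular clustering statistics.
phases = (2 * np.pi * rocking_freq * event_times) % (2 * np.pi)
mean_vector = np.mean(np.exp(1j * phases))
resultant_length = np.abs(mean_vector)    # 0 = uniform phases, 1 = perfect clustering
mean_phase = np.angle(mean_vector)

print(f"{event_times.size} events, resultant length = {resultant_length:.2f}, "
      f"mean phase = {mean_phase:.2f} rad")
```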

Monday, February 25, 2019

Female brains are younger than male brains.

Fascinating, from Goyal et al. (open source article):

Significance
Prior work has identified many sex differences in the brain, including during brain aging and in neurodegenerative diseases. Notably, many of these studies are performed by comparing age-matched females and males. Evolutionary theorists have predicted that females might have more youthful brains (neoteny) as compared with males, but until now findings in support of this theory have been limited to postmortem transcriptional analysis, some of which is contradictory. To test this hypothesis in vivo, we analyzed sex differences in a unique brain PET dataset in over 200 normal human adults across the adult life span. We find that in terms of brain metabolism, the adult female brain is on average a few years younger than the male brain.
Abstract
Sex differences influence brain morphology and physiology during both development and aging. Here we apply a machine learning algorithm to a multiparametric brain PET imaging dataset acquired in a cohort of 20- to 82-year-old, cognitively normal adults (n = 205) to define their metabolic brain age. We find that throughout the adult life span the female brain has a persistently lower metabolic brain age—relative to their chronological age—compared with the male brain. The persistence of relatively younger metabolic brain age in females throughout adulthood suggests that development might in part influence sex differences in brain aging. Our results also demonstrate that trajectories of natural brain aging vary significantly among individuals and provide a method to measure this.
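The "metabolic brain age" logic can be sketched in a few lines: train a model that predicts chronological age from brain measurements in one group, apply it to the other group, and read the average prediction error as a brain-age gap. The code below is a generic stand-in with simulated features, not the authors' algorithm or data; a female "younger metabolism" offset is simply built into the simulation to show how such a gap would surface.

```python
# Generic "brain age gap" sketch with simulated data -- not the Goyal et al. pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate_group(n, metabolic_offset):
    """Fake metabolic features that drift with age, plus a group-specific offset."""
    age = rng.uniform(20, 82, size=n)
    features = np.column_stack(
        [age + metabolic_offset + rng.normal(scale=5, size=n) for _ in range(10)])
    return features, age

X_male, age_male = simulate_group(100, metabolic_offset=0.0)
X_female, age_female = simulate_group(105, metabolic_offset=-3.0)  # assumed "younger" metabolism

# Train on one group, predict chronological age in the other, inspect the average gap.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_male, age_male)
gap = np.mean(model.predict(X_female) - age_female)
print(f"mean metabolic brain-age gap (predicted minus chronological, females): {gap:+.1f} years")
```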

Friday, February 22, 2019

Ohmygawd.....swarms of drones with one distributed brain.

I have to pass on the scary video below, pointed to by a recent WaPo article. Reminds me of the fighter battles in the Star Wars movies. Wars of the future are going to be increasingly digital, hopefully less damaging to real human bodies.
Dr. Will Roper, who now serves as Assistant Air Force Secretary, described that swarm in Jan. 2017 as “a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature.”

Thursday, February 21, 2019

Watching social influence start to bias perceptual integration as children develop

From Large et al.:
The opinions of others have a profound influence on decision making in adults. The impact of social influence appears to change during childhood, but the underlying mechanisms and their development remain unclear. We tested 125 neurotypical children between the ages of 6 and 14 years on a perceptual decision task about 3D-motion figures under informational social influence. In these children, a systematic bias in favor of the response of another person emerged at around 12 years of age, regardless of whether the other person was an age-matched peer or an adult. Drift diffusion modeling indicated that this social influence effect in neurotypical children was due to changes in the integration of sensory information, rather than solely a change in decision behavior. When we tested a smaller cohort of 30 age- and IQ-matched autistic children on the same task, we found some early decision bias to social influence, but no evidence for the development of systematic integration of social influence into sensory processing for any age group. Our results suggest that by the early teens, typical neurodevelopment allows social influence to systematically bias perceptual processes in a visual task previously linked to the dorsal visual stream. That the same bias did not appear to emerge in autistic adolescents in this study may explain some of their difficulties in social interactions.
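For readers unfamiliar with drift diffusion modeling, the distinction the authors draw can be shown with a toy simulation: social influence could either tilt the rate at which sensory evidence accumulates (a change in integration) or shift the starting point of the decision variable (a pure response bias). The sketch below is my own illustration, not the authors' model; both biases raise the probability of choosing the socially endorsed option, but they leave different reaction-time signatures, which is what model fitting exploits in real data.

```python
# Toy drift diffusion simulation: integration bias (drift) vs. decision bias (starting point).
import numpy as np

def simulate_ddm(drift, start, threshold=1.0, noise=1.0, dt=0.001, max_t=10.0, rng=None):
    """Return (choice, reaction_time) for one trial; choice 1 = upper boundary."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = start * threshold, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= threshold else 0), t

rng = np.random.default_rng(1)
n_trials = 2000

# (a) Social influence as biased integration: drift tilted toward the "social" option.
drift_bias = [simulate_ddm(drift=0.5, start=0.0, rng=rng)[0] for _ in range(n_trials)]
# (b) Social influence as a decision bias: starting point shifted toward the "social" option.
start_bias = [simulate_ddm(drift=0.0, start=0.3, rng=rng)[0] for _ in range(n_trials)]

print("P(choose social option), drift bias:", np.mean(drift_bias))
print("P(choose social option), start bias:", np.mean(start_bias))
```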

Wednesday, February 20, 2019

Fighting social media misinformation using crowdsourced judgements

From Pennycook and Rand:

Significance
Many people consume news via social media. It is therefore desirable to reduce social media users’ exposure to low-quality news content. One possible intervention is for social media ranking algorithms to show relatively less content from sources that users deem to be untrustworthy. But are laypeople’s judgments reliable indicators of quality, or are they corrupted by either partisan bias or lack of information? Perhaps surprisingly, we find that laypeople—on average—are quite good at distinguishing between lower- and higher-quality sources. These results indicate that incorporating the trust ratings of laypeople into social media ranking algorithms may prove an effective intervention against misinformation, fake news, and news content with heavy political bias.
Abstract
Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
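The "politically balanced layperson ratings" are just an equal-weight average of Democrats' and Republicans' mean trust ratings for each source, which is then correlated with fact-checker ratings. A minimal sketch with made-up numbers (not the study's data):

```python
# Equal-weight crowd ratings correlated with fact-checker ratings -- made-up numbers.
import numpy as np

sources = ["mainstream_A", "mainstream_B", "hyperpartisan_C", "fake_news_D"]
dem_trust = {"mainstream_A": 4.2, "mainstream_B": 3.9, "hyperpartisan_C": 1.8, "fake_news_D": 1.2}
rep_trust = {"mainstream_A": 3.1, "mainstream_B": 3.4, "hyperpartisan_C": 2.6, "fake_news_D": 1.5}
fact_checker = {"mainstream_A": 4.5, "mainstream_B": 4.1, "hyperpartisan_C": 1.5, "fake_news_D": 1.0}

balanced = np.array([(dem_trust[s] + rep_trust[s]) / 2 for s in sources])  # equal party weight
checkers = np.array([fact_checker[s] for s in sources])
r = np.corrcoef(balanced, checkers)[0, 1]
print(f"correlation between balanced crowd ratings and fact-checkers: r = {r:.2f}")
```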

Tuesday, February 19, 2019

Sleeping in standby mode

In the editor's choice section of the current Science Magazine, Claudia Pama points to work by Legendre et al. showing that sleepers process surrounding events sufficiently to know when it might be a good idea to rapidly wake up. Here is the article abstract:
Sleep is a vital need, forcing us to spend a large portion of our life unable to interact with the external world. Current models interpret such extreme vulnerability as the price to pay for optimal learning. Sleep would limit external interferences on memory consolidation and allow neural systems to reset through synaptic downscaling. Yet, the sleeping brain continues generating neural responses to external events, revealing the preservation of cognitive processes ranging from the recognition of familiar stimuli to the formation of new memory representations. Why would sleepers continue processing external events and yet remain unresponsive? Here we hypothesized that sleepers enter a ‘standby mode’ in which they continue tracking relevant signals, finely balancing the need to stay inward for memory consolidation with the ability to rapidly awake when necessary. Using electroencephalography to reconstruct competing streams in a multitalker environment, we demonstrate that the sleeping brain amplifies meaningful speech compared to irrelevant signals. However, the amplification of relevant stimuli was transient and vanished during deep sleep. The effect of sleep depth could be traced back to specific oscillations, with K-complexes promoting relevant information in light sleep, whereas slow waves actively suppress relevant signals in deep sleep. Thus, the selection of relevant stimuli continues to operate during sleep but is strongly modulated by specific brain rhythms.
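The phrase "using electroencephalography to reconstruct competing streams" usually refers to stimulus-reconstruction decoding: a linear mapping is learned from time-lagged EEG channels back to a speech amplitude envelope, and the reconstruction is compared against each competing stream. The sketch below is a generic toy version with simulated signals, not the authors' pipeline; the lag range, ridge penalty, and fake "EEG" are all assumptions for illustration.

```python
# Toy stimulus-reconstruction decoder: lagged EEG -> speech envelope (simulated data).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
fs = 64                                               # sampling rate (Hz)
n_samples, n_channels = fs * 60, 16                   # one minute of data, 16 channels

def random_envelope(n):
    """Smooth positive stand-in for a speech amplitude envelope."""
    return np.abs(np.convolve(rng.normal(size=n), np.ones(16) / 16, mode="same"))

relevant_env = random_envelope(n_samples)             # the "meaningful" stream
irrelevant_env = random_envelope(n_samples)           # the competing stream
eeg = 0.5 * relevant_env[:, None] + rng.normal(size=(n_samples, n_channels))  # EEG tracks it weakly

def lag_matrix(x, max_lag):
    """Stack copies of x shifted by 0..max_lag samples (wrap-around ignored for this toy)."""
    return np.column_stack([np.roll(x, lag, axis=0) for lag in range(max_lag + 1)])

X = lag_matrix(eeg, max_lag=16)                       # ~250 ms of lags at 64 Hz
half = n_samples // 2
decoder = Ridge(alpha=1.0).fit(X[:half], relevant_env[:half])
reconstruction = decoder.predict(X[half:])

print("r vs relevant stream:  ", round(np.corrcoef(reconstruction, relevant_env[half:])[0, 1], 2))
print("r vs irrelevant stream:", round(np.corrcoef(reconstruction, irrelevant_env[half:])[0, 1], 2))
```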

Monday, February 18, 2019

Brain patterns of consciousness

A study produced by a collaboration across seven countries has identified brain signatures that can indicate consciousness without relying on self-report or on asking patients to engage in a particular task. It demonstrates that conscious and unconscious patients can be differentiated after brain injury: the brain activity patterns of brain-injured patients who are unconscious are similar to those observed in healthy subjects under deep anaesthesia.

A clip from a summary of the work:


We found two main patterns of communication across regions. One simply reflected physical connections of the brain, such as communication only between pairs of regions that have a direct physical link between them. This was seen in patients with virtually no conscious experience.
One represented very complex brain-wide dynamic interactions across a set of 42 brain regions that belong to six brain networks with important roles in cognition (see image above). This complex pattern was almost only present in people with some level of consciousness.
Importantly, this complex pattern disappeared when patients were under deep anaesthesia, confirming that our methods were indeed sensitive to the patients' level of consciousness and not their general brain damage or external responsiveness.
Here is the main article's abstract:
Adopting the framework of brain dynamics as a cornerstone of human consciousness, we determined whether dynamic signal coordination provides specific and generalizable patterns pertaining to conscious and unconscious states after brain damage. A dynamic pattern of coordinated and anticoordinated functional magnetic resonance imaging signals characterized healthy individuals and minimally conscious patients. The brains of unresponsive patients showed primarily a pattern of low interareal phase coherence mainly mediated by structural connectivity, and had smaller chances to transition between patterns. The complex pattern was further corroborated in patients with covert cognition, who could perform neuroimaging mental imagery tasks, validating this pattern’s implication in consciousness. Anesthesia increased the probability of the less complex pattern to equal levels, validating its implication in unconsciousness. Our results establish that consciousness rests on the brain’s ability to sustain rich brain dynamics and pave the way for determining specific and generalizable fingerprints of conscious and unconscious states.
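A common recipe for extracting this kind of "dynamic pattern" from fMRI, sketched below with simulated signals (it is not the authors' exact pipeline), is to compute sliding-window connectivity matrices between regions, cluster the windows into a small set of recurring coordination patterns, and then count how often the brain transitions between them.

```python
# Sliding-window connectivity -> recurring patterns -> transition probabilities (simulated data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_regions, n_timepoints, win, step = 42, 600, 30, 5
bold = rng.normal(size=(n_timepoints, n_regions))      # stand-in for regional BOLD signals

# One vectorized correlation matrix (upper triangle) per sliding window.
windows = []
for start in range(0, n_timepoints - win, step):
    c = np.corrcoef(bold[start:start + win].T)
    windows.append(c[np.triu_indices(n_regions, k=1)])
windows = np.array(windows)

# Cluster windows into k recurring coordination patterns.
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(windows)

# Empirical transition matrix between patterns (row = from, column = to).
transitions = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    transitions[a, b] += 1
transitions /= np.maximum(transitions.sum(axis=1, keepdims=True), 1)
print(np.round(transitions, 2))
```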

Friday, February 15, 2019

How the brain and spinal cord distinguish self-touch from the touch of others

I’ve always been struck by how much more effective someone else’s touch is than my own in relaxing tense muscles in my body. Boehme et al. show a neural basis for how we distinguish self-touch from the touch of others:
Differentiation between self-produced tactile stimuli and touch by others is necessary for social interactions and for a coherent concept of “self.” The mechanisms underlying this distinction are unknown. Here, we investigated the distinction between self- and other-produced light touch in healthy volunteers using three different approaches: fMRI, behavioral testing, and somatosensory-evoked potentials (SEPs) at spinal and cortical levels. Using fMRI, we found self–other differentiation in somatosensory and sociocognitive areas. Other-touch was related to activation in several areas, including somatosensory cortex, insula, superior temporal gyrus, supramarginal gyrus, striatum, amygdala, cerebellum, and prefrontal cortex. During self-touch, we instead found deactivation in insula, anterior cingulate cortex, superior temporal gyrus, amygdala, parahippocampal gyrus, and prefrontal areas. Deactivation extended into brain areas encoding low-level sensory representations, including thalamus and brainstem. These findings were replicated in a second cohort. During self-touch, the sensorimotor cortex was functionally connected to the insula, and the threshold for detection of an additional tactile stimulus was elevated. Differential encoding of self- vs. other-touch during fMRI correlated with the individual self-concept strength. In SEP, cortical amplitudes were reduced during self-touch, while latencies at cortical and spinal levels were faster for other-touch. We thus demonstrated a robust self–other distinction in brain areas related to somatosensory, social cognitive, and interoceptive processing. Signs of this distinction were evident at the spinal cord. Our results provide a framework for future studies in autism, schizophrenia, and emotionally unstable personality disorder, conditions where symptoms include social touch avoidance and poor self-vs.-other discrimination.

Thursday, February 14, 2019

The science of sway in musical ensembles.

I'm passing on this fascinating article that the violinist in my piano trio sent to her musician colleagues. Trainor's group at McMaster University documents how body sway reflects joint emotional expression in music ensemble performance.
Joint action is essential in daily life, as humans often must coordinate with others to accomplish shared goals. Previous studies have mainly focused on sensorimotor aspects of joint action, with measurements reflecting event-to-event precision of interpersonal sensorimotor coordination (e.g., tapping). However, while emotional factors are often closely tied to joint actions, they are rarely studied, as event-to-event measurements are insufficient to capture higher-order aspects of joint action such as emotional expression. To quantify joint emotional expression, we used motion capture to simultaneously measure the body sway of each musician in a trio (piano, violin, cello) during performances. Excerpts were performed with or without emotional expression. Granger causality was used to analyze body sway movement time series amongst musicians, which reflects information flow. Results showed that the total Granger-coupling of body sway in the ensemble was higher when performing pieces with emotional expression than without. Granger-coupling further correlated with the emotional intensity as rated by both the ensemble members themselves and by musician judges, based on the audio recordings alone. Together, our findings suggest that Granger-coupling of co-actors’ body sways reflects joint emotional expression in a music ensemble, and thus provide a novel approach to studying joint emotional expression.
Note: here is the authors' description of Granger causality:
Granger causality is a statistical estimation of the degree to which one time series is predicted by the history of another time series, over and above prediction by its own history. The larger the value of Granger causality, the better the prediction, and the more information that is flowing from one time series to another. Previous studies have shown that Granger causalities among performers’ motions in a music ensemble reflect leadership dynamics and thus information flow [31, 36, 43], which are higher-order aspects of joint action.
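For readers who want to see what this looks like in practice, here is a minimal sketch using the statsmodels implementation of Granger causality, with two simulated "body sway" series in which one player's sway is built to lag-follow the other's. The data and parameter choices are mine, purely for illustration; they are not the authors' recordings or analysis code.

```python
# Granger causality between two simulated body-sway time series (statsmodels).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
violin_sway = np.cumsum(rng.normal(size=n)) * 0.01           # slowly drifting sway
cello_sway = 0.6 * np.roll(violin_sway, 5) + rng.normal(scale=0.05, size=n)
cello_sway[:5] = rng.normal(scale=0.05, size=5)              # overwrite the wrap-around samples

# statsmodels tests whether the SECOND column Granger-causes the FIRST column.
data = np.column_stack([cello_sway, violin_sway])
results = grangercausalitytests(data, maxlag=10, verbose=False)

# A small p-value at some lag means the violinist's past sway improves prediction
# of the cellist's sway beyond the cellist's own history (information flow violin -> cello).
for lag, res in results.items():
    f_stat, p_value = res[0]["ssr_ftest"][:2]
    print(f"lag {lag:2d}: F = {f_stat:7.2f}, p = {p_value:.3g}")
```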

Wednesday, February 13, 2019

Parents mention sons more often than daughters on social media.

From Sivak and Smirnov:

Significance
Parents’ preference for sons is a well-known phenomenon. This study examines whether the use of social media by parents is gender biased. Due to the large-scale use of social media, even a moderate bias might significantly contribute to gender inequality. We use data from a Russian social networking site on posts made by 635,665 users and find that parents mention sons more often than daughters and that posts featuring sons get more “likes.” This gender imbalance may send a message that girls are less important than boys or that they deserve less attention. Particularly in a country with an above-average ranking on gender parity, this invisible bias might present an intractable obstacle to gender equality.
Abstract
Gender inequality starts early in life. Parents tend to prefer boys over girls, which is manifested in reproductive behavior, marital life, and parents’ pastimes and investments in their children. While social media and sharing information about children (so-called “sharenting”) have become an integral part of parenthood, whether and how gender preference shapes the online behavior of users are not well known. In this paper we use public posts made by 635,665 users from Saint Petersburg on a popular Russian social networking site, to investigate public mentions of daughters and sons on social media. We find that both men and women mention sons more often than daughters in their posts. We also find that posts featuring sons receive more “likes” on average. Our results indicate that girls are underrepresented in parents’ digital narratives about their children, in a country with an above-average ranking on gender parity. This gender imbalance may send a message that girls are less important than boys or that they deserve less attention, thus reinforcing gender inequality from an early age.

Tuesday, February 12, 2019

Correlations between our gut microbiota and mental health.

Valles-Colomer et al. note variations in the levels of different groups of gut bacteria that correlate with depression, or with higher quality of life.
The relationship between gut microbial metabolism and mental health is one of the most intriguing and controversial topics in microbiome research. Bidirectional microbiota–gut–brain communication has mostly been explored in animal models, with human research lagging behind. Large-scale metagenomics studies could facilitate the translational process, but their interpretation is hampered by a lack of dedicated reference databases and tools to study the microbial neuroactive potential. Surveying a large microbiome population cohort (Flemish Gut Flora Project, n = 1,054) with validation in independent data sets (ntotal = 1,070), we studied how microbiome features correlate with host quality of life and depression. Butyrate-producing Faecalibacterium and Coprococcus bacteria were consistently associated with higher quality of life indicators. Together with Dialister, Coprococcus spp. were also depleted in depression, even after correcting for the confounding effects of antidepressants. Using a module-based analytical framework, we assembled a catalogue of neuroactive potential of sequenced gut prokaryotes. Gut–brain module analysis of faecal metagenomes identified the microbial synthesis potential of the dopamine metabolite 3,4-dihydroxyphenylacetic acid as correlating positively with mental quality of life and indicated a potential role of microbial γ-aminobutyric acid production in depression. Our results provide population-scale evidence for microbiome links to mental health, while emphasizing confounder importance.
Added note: Pennisi has done a commentary on this and similar work.

Monday, February 11, 2019

Oliver Sacks - The Machine Stops

I want to point MindBlog readers to a wonderful short essay written by Oliver Sacks before his death from cancer, in which he notes the parallels between the modern world he saw around him and E.M. Forster's prescient 1909 short story "The Machine Stops," which imagined a future in which humans live in separate cells, communicating only by audio and visual devices (much as, today, the patrons of a bar at happy hour are more likely to be looking at their cell phones than chatting with each other). A few clips:
I cannot get used to seeing myriads of people in the street peering into little boxes or holding them in front of their faces, walking blithely in the path of moving traffic, totally out of touch with their surroundings. I am most alarmed by such distraction and inattention when I see young parents staring at their cell phones and ignoring their own babies as they walk or wheel them along. Such children, unable to attract their parents’ attention, must feel neglected, and they will surely show the effects of this in the years to come.
I am confronted every day with the complete disappearance of the old civilities. Social life, street life, and attention to people and things around one have largely disappeared, at least in big cities, where a majority of the population is now glued almost without pause to phones or other devices—jabbering, texting, playing games, turning more and more to virtual reality of every sort.
I worry more about the subtle, pervasive draining out of meaning, of intimate contact, from our society and our culture. When I was eighteen, I read Hume for the first time, and I was horrified by the vision he expressed in his eighteenth-century work “A Treatise of Human Nature,” in which he wrote that mankind is “nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement.” As a neurologist, I have seen many patients rendered amnesic by destruction of the memory systems in their brains, and I cannot help feeling that these people, having lost any sense of a past or a future and being caught in a flutter of ephemeral, ever-changing sensations, have in some way been reduced from human beings to Humean ones.
I have only to venture into the streets of my own neighborhood, the West Village, to see such Humean casualties by the thousand: younger people, for the most part, who have grown up in our social-media era, have no personal memory of how things were before, and no immunity to the seductions of digital life. What we are seeing—and bringing on ourselves—resembles a neurological catastrophe on a gigantic scale.
I see science, with its depth of thought, its palpable achievements and potentials, as equally important; and science, good science, is flourishing as never before, though it moves cautiously and slowly, its insights checked by continual self-testing and experimentation. I revere good writing and art and music, but it seems to me that only science, aided by human decency, common sense, farsightedness, and concern for the unfortunate and the poor, offers the world any hope in its present morass. This idea is explicit in Pope Francis’s encyclical and may be practiced not only with vast, centralized technologies but by workers, artisans, and farmers in the villages of the world. Between us, we can surely pull the world through its present crises and lead the way to a happier time ahead. As I face my own impending departure from the world, I have to believe in this—that mankind and our planet will survive, that life will continue, and that this will not be our final hour.

Friday, February 08, 2019

Flash judgement based on appearance.

I want to point to two articles on how we make rapid judgements based on first impressions.

Todorov's essay in Aeon is a fascinating review of the history of studies on flash judgement of faces. Here is one clip:

Competence is perceived as the most important characteristic of a good politician. But what people perceive as an important characteristic can change in different situations. Imagine that it is wartime, and you must cast your presidential vote today. Would you vote for face A or face B in Figure 1 below? Most people quickly go with A. What if it is peacetime? Most people now go with B.

These images were created by the psychologist Anthony Little at the University of Bath and his colleagues in the UK. Face A is perceived as more dominant, more masculine, and a stronger leader – attributes that matter in wartime. Face B is perceived as more intelligent, forgiving, and likeable – attributes that matter more in peacetime. Now look at the images in Figure 2 below.

You should be able to recognise the former president George W Bush and the former secretary of state John Kerry. Back when the study was done, Kerry was the Democratic candidate running against Bush for the US presidency. Can you see some similarities between images A (Figure 1) and C (Figure 2), and between images B (Figure 1) and D (Figure 2)? The teaser is that the images in Figure 1 show what makes the faces of Bush and Kerry distinctive. To obtain the distinctiveness of a face, you need only find out what makes it different from an average face – in this case, a morph of about 30 male faces. The faces in Figure 1 were created by accentuating the differences between the shapes of Bush’s and Kerry’s faces and the average face shape. At the time of the election in 2004, the US was at war with Iraq. I will leave the rest to your imagination. 
What we consider an important characteristic also depends on our ideological inclinations. Take a look at Figure 3 below. Who would make a better leader?

...whereas liberal voters tend to choose the face on the left, conservative voters tend to choose the face on the right. These preferences reflect our ideological stereotypes of Right-wing, masculine, dominant-looking leaders, and Left-wing, feminine, non-dominant-looking leaders.
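The "accentuating the differences from an average face" trick Todorov describes is, at its core, simple landmark arithmetic: subtract the average shape from an individual shape and scale the difference up. Here is a minimal sketch with made-up landmark coordinates (not Little et al.'s software):

```python
# Shape caricature: exaggerate a face's deviation from the average face (toy landmarks).
import numpy as np

def caricature(face_landmarks, average_landmarks, strength=1.5):
    """strength = 1 reproduces the shape, > 1 exaggerates it, < 1 moves it toward the average."""
    face = np.asarray(face_landmarks, dtype=float)
    avg = np.asarray(average_landmarks, dtype=float)
    return avg + strength * (face - avg)

# Made-up (x, y) landmark coordinates for three points on an "average" and an individual face.
average_face = [[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]
some_face = [[0.1, 0.0], [1.1, 0.1], [0.5, 1.2]]
print(caricature(some_face, average_face, strength=2.0))
```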
Hu et al. study first impressions of personality traits from body shapes. Here is their abstract, followed by a summary figure (click to enlarge):
People infer the personalities of others from their facial appearance. Whether they do so from body shapes is less studied. We explored personality inferences made from body shapes. Participants rated personality traits for male and female bodies generated with a three-dimensional body model. Multivariate spaces created from these ratings indicated that people evaluate bodies on valence and agency in ways that directly contrast positive and negative traits from the Big Five domains. Body-trait stereotypes based on the trait ratings revealed a myriad of diverse body shapes that typify individual traits. Personality-trait profiles were predicted reliably from a subset of the body-shape features used to specify the three-dimensional bodies. Body features related to extraversion and conscientiousness were predicted with the highest consensus, followed by openness traits. This study provides the first comprehensive look at the range, diversity, and reliability of personality inferences that people make from body shapes.

Figure - (click to enlarge). A subset of body stereotypes we created from single or multiple bodies that received extreme ratings on individual traits (for the entire set, see the original article). The figure is organized to show sample traits with positive and negative combinations of valence (V) and agency (A). For example, Row 1 shows bodies that have negative valence (heavier) and negative agency (less shaped, more rectangular).

Thursday, February 07, 2019

A few anti-technology screeds:

German artist, filmmaker and writer Hito Steyerl writes an impassioned diatribe titled "Technology Has Destroyed Reality":
In this new age of artificial stupidity, technological disruption has turned destructive. Its greatest victim is reality itself...It is a far cry from an earlier period of digital expansion, when internet communication was seen to promote global exchange and understanding...The online world seemed like a Disney vision of multiculturalism, promoting sterile tolerance from above. Now technology divides and fragments; it identifies and ranks people.
The norms of reality TV have found a foothold in the digital age. The point is not that there is no reality — the point is that it’s every fact for itself, competing against all others, while social media multiply alternative versions of it...This is our real, existing digital world: nothing more than hourly waves of feverish and toxic agitation, played out over stale mainstream channels that discourage innovation and experimentation, drown in excruciating advertisements and drain people’s attention and souls...An unpopular truth cannot survive online in such a world, because traction is privileged over veracity. And to artificially stupid automatons and algorithms, reality is defined as brute quantity, by ranking, ratings and elimination.
And author Erin Griffith has written two pieces. The first, "Why Are Young People Pretending to Love Work?", very much describes the culture of techie 20- to 35-year-old workers in Austin, Texas - where I now live - in which performative workaholism and 18-hour work days have become a lifestyle.
“Owning one’s moment” is a clever way to rebrand “surviving the rat race.” In the new work culture, enduring or even merely liking one’s job is not enough. Workers should love what they do, and then promote that love on social media, thus fusing their identities to that of their employers...This is toil glamour, and it is going mainstream. Most visibly, WeWork — which investors recently valued at $47 billion — is on its way to becoming the Starbucks of office culture. It has exported its brand of performative workaholism to 27 countries...Rather than just renting desks, the company aims to encompass all aspects of people’s lives, in both physical and digital worlds...The ideal client, one imagines, is someone so enamored of the WeWork office aesthetic...that she sleeps in a WeLive apartment, works out at a Rise by We gym, and sends her children to a WeGrow school.
In the same vein see Roger Cohen's piece on the harm in hustle culture.

Griffith's second piece, "The Other Tech Bubble", describes the growing disenchantment with tech companies:
As headlines have exposed the troubling inner workings of company after company, startup culture no longer feels like fodder for gentle parodies about ping pong and hoodies. It feels ugly and rotten. Facebook, the greatest startup success story of this era, isn’t a merry band of hackers building cutesy tools that allow you to digitally Poke your friends. It’s a powerful and potentially sinister collector of personal data, a propaganda partner to government censors, and an enabler of discriminatory advertising.
In 2008, it was Wall Street bankers. In 2017, tech workers are the world’s villain. “It’s the exact same story of too many people with too much money. That breeds arrogance, bad behavior, and jealousy, and society just loves to take it down,” the investor said. As a result, investors are avoiding anything that feels risky. Hunter Walk, a partner with venture capital firm Homebrew, which invested in Bodega, attributes the backlash to a broader response to power. Tech is now a powerful institution, he says. “We no longer get the benefit of the doubt 100 percent of the time, and that’s okay.”
Evidence is mounting that the world is no longer fascinated with Silicon Valley: It’s disturbed by its callous behavior. But it will take a massive shift to introduce self-awareness to an industry that has always assumed it was changing the world for the better. Cynics would argue it doesn’t matter. The big tech companies are too big to fail, too complicated to be parsed or regulated, and too integral to business, the economy, and day-to-day life. We’re not going to abandon our cell phones or social networks. This is how we live now.
And finally, a cartoon from the NYTimes on our evolution:

Wednesday, February 06, 2019

Successes of advantaged group members get more attention.

Kteily et al. show how political ideology shapes the amplification of the accomplishments of disadvantaged vs. advantaged group members:

Significance
Inequality prospers when successes of advantaged group members (e.g., men, whites) get more attention than equivalent successes of disadvantaged group members (e.g., women, blacks). What determines whose successes individuals deem worth promoting vs. those they ignore? Using hundreds of thousands of tweets from the 2016 Olympics, we show that liberals are much more likely than conservatives to shine a spotlight on black and female (vs. white and male) US gold medalists. Two further experiments provide evidence that such differential amplification of successful targets is driven by a motivation—higher among liberals—to raise disadvantaged groups’ standing in service of equality. We find that liberals drive differential amplification more than conservatives and establish a behavioral mechanism through which liberals’ egalitarian motives manifest.
Abstract
Recent years have witnessed an increased public outcry in certain quarters about a perceived lack of attention given to successful members of disadvantaged groups relative to equally meritorious members of advantaged groups, exemplified by social media campaigns centered around hashtags, such as #OscarsSoWhite and #WomenAlsoKnowStuff. Focusing on political ideology, we investigate here whether individuals differentially amplify successful targets depending on whether these targets belong to disadvantaged or advantaged groups, behavior that could help alleviate or entrench group-based disparities. Study 1 examines over 500,000 tweets from over 160,000 Twitter users about 46 unambiguously successful targets varying in race (white, black) and gender (male, female): American gold medalists from the 2016 Olympics. Leveraging advances in computational social science, we identify tweeters’ political ideology, race, and gender. Tweets from political liberals were much more likely than those from conservatives to be about successful black (vs. white) and female (vs. male) gold medalists (and especially black females), controlling for tweeters’ own race and gender, and even when tweeters themselves were white or male (i.e., advantaged group members). Studies 2 and 3 provided experimental evidence that liberals are more likely than conservatives to differentially amplify successful members of disadvantaged (vs. advantaged) groups and suggested that this is driven by liberals’ heightened concern with social equality. Addressing theorizing about ideological asymmetries, we observed that political liberals are more responsible than conservatives for differential amplification. Our results highlight ideology’s polarizing power to shape even whose accomplishments we promote, and extend theorizing about behavioral manifestations of egalitarian motives.

Tuesday, February 05, 2019

The decay of cognitive diversity - global WEIRDing is upon us

I recommend that you read this brief piece by Kensy Cooperrider, a cognitive scientist in the Department of Psychology at the University of Chicago. He notes that just as we are beginning to appreciate that most behavioral science has focused on a small sliver of humanity - people from Western, educated, industrialised, rich, democratic (i.e. WEIRD) societies - the younger generations of indigenous peoples across the world, who have had distinctive and different ways of parsing the world, are becoming increasingly WEIRD themselves. A few clips:
For centuries, Inuit hunters navigated the Arctic by consulting wind, snow and sky. Now they use GPS. Speakers of the aboriginal language Gurindji, in northern Australia, used to command 28 variants of each cardinal direction. Children there now use the four basic terms, and they don’t use them very well. In the arid heights of the Andes, the Aymara developed an unusual way of understanding time, imagining the past as in front of them, and the future at their backs. But for the youngest generation of Aymara speakers – increasingly influenced by Spanish – the future lies ahead.
Cooperrider proceeds to describe numerous examples of cognitive diversity that are disappearing: differences in how we relate to space, time, numbers, nature, and each other, and in how we filter our experiences and allocate our attention.

Monday, February 04, 2019

Facial masculinity does not indicate immunocompetence

From Zaidi et al.:

Significance
Facial masculinity has been considered a sexual ornament in humans, akin to peacock tails and stag antlers. Recently, studies have questioned the once-popular view that facial masculinity is a condition-dependent male ornament signaling immunocompetence (the immunocompetence handicap hypothesis). We sought to rigorously test these ideas using high-resolution phenotypic (3D facial images) and genetic data in the largest sample to date. We found no support for the immunocompetence handicap hypothesis of facial masculinity in humans. Our findings add to a growing body of evidence challenging a popular viewpoint in the field and highlight the need for a deeper understanding of the genetic and environmental factors underlying variation in facial masculinity and human sexual dimorphism more broadly.
Abstract
Recent studies have called into question the idea that facial masculinity is a condition-dependent male ornament that indicates immunocompetence in humans. We add to this growing body of research by calculating an objective measure of facial masculinity/femininity using 3D images in a large sample (n = 1,233) of people of European ancestry. We show that facial masculinity is positively correlated with adult height in both males and females. However, facial masculinity scales with growth similarly in males and females, suggesting that facial masculinity is not exclusively a male ornament, as male ornaments are typically more sensitive to growth in males compared with females. Additionally, we measured immunocompetence via heterozygosity at the major histocompatibility complex (MHC), a widely-used genetic marker of immunity. We show that, while height is positively correlated with MHC heterozygosity, facial masculinity is not. Thus, facial masculinity does not reflect immunocompetence measured by MHC heterozygosity in humans. Overall, we find no support for the idea that facial masculinity is a condition-dependent male ornament that has evolved to indicate immunocompetence.

Friday, February 01, 2019

Oligarchia - the rise of autonomous analog computing

I want to pass on the final paragraphs of a piece done by George Dyson for Edge.org:
We assume that a search engine company builds a model of human knowledge and allows us to query that model, or that some other company (or maybe it’s the same company) builds a model of road traffic and allows us to access that model, or that yet another company builds a model of the social graph and allows us to join that model — for a price we are not quite told. This fits our preconceptions that an army of programmers is still in control somewhere but it is no longer the way the world now works.
The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.
These new hybrid organizations, although built upon digital computers, are operating as analog computers on a vast, global scale, processing information as continuous functions and treating streams of bits the way vacuum tubes treat streams of electrons, or the way neurons treat information in a brain. Large hybrid analog/digital computer networks, in the form of economies, have existed for a long time, but for most of history the information circulated at the speed of gold and silver and only recently at the speed of light.
We imagine that individuals, or individual algorithms, are still behind the curtain somewhere, in control. We are fooling ourselves. The new gatekeepers, by controlling the flow of information, rule a growing sector of the world.
What deserves our full attention is not the success of a few companies that have harnessed the powers of hybrid analog/digital computing, but what is happening as these powers escape into the wild and consume the rest of the world.
The next revolution will be the ascent of analog systems over which the dominion of digital programming comes to an end. Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.

Thursday, January 31, 2019

Feedspot: MindBlog in top 100 psychology blogs

I am clueless about monitoring traffic on MindBlog, and have been puzzled about why I get 3-5 emails a week wanting to contribute content, share content, sell me editing or other services, or assist in 'monetizing' my site more effectively. Google is constantly on my case to place advertisements on MindBlog. My cut-and-paste answer to all such emails is: "I must decline your kind offer. MindBlog is my own idiosyncratic hobby, and I only post content that I initiate. I am not concerned about number of followers, and have no interest in revenue." A recent offer to provide editing services finally clued me in to at least one source that contributes to all these emails. They sent their solicitation to the top 100 psychology blogs identified by feedspot.com. Turns out that MindBlog is currently number 46.