Showing posts with label language.

Wednesday, February 02, 2022

How fast people respond to each other is a metric of social connection.

From Templeton et al.:  

Significance

Social connection is critical for our mental and physical health yet assessing and measuring connection has been challenging. Here, we demonstrate that a feature intrinsic to conversation itself—the speed with which people respond to each other—is a simple, robust, and sufficient metric of social connection. Strangers and friends feel more connected when their conversation partners respond quickly. Because extremely short response times (less than 250 ms) preclude conscious control, they provide an honest signal that even eavesdroppers use to judge how well two people “click.”
Abstract
Clicking is one of the most robust metaphors for social connection. But how do we know when two people "click"? We asked pairs of friends and strangers to talk with each other and rate their felt connection. For both friends and strangers, speed in response was a robust predictor of feeling connected. Conversations with faster response times felt more connected than conversations with slower response times, and within conversations, connected moments had faster response times than less-connected moments. This effect was determined primarily by partner responsivity: People felt more connected to the degree that their partner responded quickly to them rather than by how quickly they responded to their partner. The temporal scale of these effects (less than 250 ms) precludes conscious control, thus providing an honest signal of connection. Using a round-robin design in each of six closed networks, we show that faster responders evoked greater feelings of connection across partners. Finally, we demonstrate that this signal is used by third-party listeners as a heuristic of how well people are connected: Conversations with faster response times were perceived as more connected than the same conversations with slower response times. Together, these findings suggest that response times comprise a robust and sufficient signal of whether two minds “click.”
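The core measure is simple enough to sketch. Below is a toy illustration (not the authors' code) of computing mean response gaps from turn timestamps and relating them to felt connection; all timestamps and ratings are invented for the example.

```python
# Illustrative sketch: response gaps from conversation turns, correlated with
# connection ratings. All data below are made up.
from statistics import mean

def mean_response_gap(turns):
    """turns: list of (speaker, onset_s, offset_s) in chronological order.
    A gap is the time from one speaker's offset to the other speaker's onset."""
    gaps = [nxt[1] - prev[2]
            for prev, nxt in zip(turns, turns[1:])
            if prev[0] != nxt[0]]              # only count changes of speaker
    return mean(gaps) if gaps else float("nan")

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Three toy conversations (speaker, onset, offset in seconds) and invented
# connection ratings on a 1-7 scale.
conversations = [
    [("A", 0.0, 2.1), ("B", 2.2, 4.0), ("A", 4.1, 6.3)],   # ~0.1-s gaps
    [("A", 0.0, 2.1), ("B", 2.6, 4.0), ("A", 4.5, 6.3)],   # ~0.5-s gaps
    [("A", 0.0, 2.1), ("B", 3.1, 4.0), ("A", 5.0, 6.3)],   # ~1-s gaps
]
ratings = [6.5, 5.0, 3.0]

gaps = [mean_response_gap(c) for c in conversations]
print(gaps, pearson(gaps, ratings))   # shorter gaps go with higher connection ratings
```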

Friday, January 28, 2022

The stories we imagine while listening to music depend on our culture.

Margulis et al. (text from their introduction):
....compared quantitative measures of narrativity (the likelihood that an excerpt of music triggers a story in listeners' minds) and narrative engagement (how vivid and clear the events of the story are in listeners' minds) for a large set of musical excerpts from Western and Chinese musical traditions for listeners in the same three distinct geographical locations as the present investigation—two suburban college towns in the US Midwest and one rural village in the Chinese province of Guizhou. Results showed that people in all three locations readily narrativize to excerpts (i.e., narrativity scores were quite high) with varying levels of narrative engagement for both Western and Chinese instrumental music; moreover, people do so to about the same degree regardless of location. Notably, however, although both excerpt narrativity and narrative engagement scores were highly correlated across the two US locations, they were not correlated (not predictive) for cross-cultural comparisons between listeners in both of the US locations and the remote rural village in Guizhou.
Here is the article's abstract:
The scientific literature sometimes considers music an abstract stimulus, devoid of explicit meaning, and at other times considers it a universal language. Here, individuals in three geographically distinct locations spanning two cultures performed a highly unconstrained task: they provided free-response descriptions of stories they imagined while listening to instrumental music. Tools from natural language processing revealed that listeners provide highly similar stories to the same musical excerpts when they share an underlying culture, but when they do not, the generated stories show limited overlap. These results paint a more complex picture of music’s power: music can generate remarkably similar stories in listeners’ minds, but the degree to which these imagined narratives are shared depends on the degree to which culture is shared across listeners. Thus, music is neither an abstract stimulus nor a universal language but has semantic affordances shaped by culture, requiring more sustained attention from psychology.
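As a rough illustration of the kind of comparison involved, here is a minimal sketch using generic NLP tools (TF-IDF and cosine similarity, which stand in for, and are simpler than, the authors' pipeline); the free-response stories below are invented.

```python
# Hedged sketch: pairwise similarity of imagined stories, within vs. across
# (hypothetical) listener groups. Stories are fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stories_group_1 = [
    "a lone rider crosses the desert at dawn chasing an outlaw",
    "a cowboy rides into the sunset after a long pursuit",
]
stories_group_2 = [
    "villagers gather by the river for a festival as boats arrive",
    "a family prepares lanterns and food for an evening celebration",
]

docs = stories_group_1 + stories_group_2
tfidf = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(tfidf)

print(f"within-group similarity: {sim[0, 1]:.2f}")
print(f"across-group similarity: {sim[0, 2]:.2f}")
```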

Tuesday, January 25, 2022

Using big data to track major shifts in human cognition

I want to pass on the first few paragraphs of a fascinating commentary by Simon DeDeo on an article by Scheffer et al. that was the subject of MindBlog's 12/31/21 post. Motivated readers can obtain a copy of the whole article by emailing me:
Scheffer et al.’s (1) exciting new work reports an historic rearrangement, occurring in the late 20th century, of the balance between reason and emotion. Its approach is part of a new trend in the psychological sciences that uses extremely large volumes of text to study basic patterns of human cognition. Recent work in this vein has included studies of the universal properties of gender representations (2), the rise of causal thinking (3), and a cognitive bias towards positivity in language itself (4). The goal of going “from text to thought” (5) is an attractive one, and the promise of the machine learning era is that we will only get better at extracting the imprints left, in text, by the mechanisms of the mind.
To establish their claims, Scheffer et al. (1) use principal component analysis to identify two major polarities of correlated vocabulary words in the Google Books corpus (6). The first polarity (PC1) tracks a shift from archaic to modern, in both material life (“iron” is archaic, “computer” is modern) and culture (“liberty” is archaic, “privacy” is modern). The second polarity (PC2) that emerges is the intriguing one, and forms the basis of their paper: Its two poles, the authors argue, correspond to the distinction between “rational” and “intuitive” language.
Their main finding then has two pieces: a shift from the intuitive pole to the rational pole (the “rise” of rationality) and then back (the “fall”) (1). The rise has begun by the start of their data in 1850, and unfolds over the course of a century or more. They attribute it to a society increasingly concerned with quantifying, and justifying, the world through scientific and impersonal language—a gradual tightening of Max Weber’s famous “iron cage” of collectivized, rationalized bureaucracy in service of the capitalist profit motive (7). The fall, meaning a shift from the rational back to the intuitive, begins in 1980, and is more rapid than the rise: By 2020, the balance is similar to that seen in the early 1900s. The fall appears to accelerate in the early 2000s, which leads the authors to associate it with social media use and a “post-truth era” where “feelings trump facts.” Both these interpretations are supported by accompanying shifts toward “collective” pronouns (we, our, and they) in the Weberian period, and then toward the “individualistic” ones (I, my, he, and she) after.
The raw effect sizes the authors report are extraordinarily large (1). At the peak in 1980, rationality words outnumbered intuition words, on average, three to one. Forty years later (and 100 y earlier), however, the balance was roughly one to one. If these represent changes in actual language use, let alone the time devoted to the underlying cognitive processes, they are enormous shifts in the nature of human experience.
1. M. Scheffer, I. van de Leemput, E. Weinans, J. Bollen, The rise and fall of rationality in language. Proc. Natl. Acad. Sci. U.S.A. 118, e2107848118 (2021).
2. T. E. S. Charlesworth, V. Yang, T. C. Mann, B. Kurdi, M. R. Banaji, Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words. Psychol. Sci. 32, 218–240 (2021).
3. R. Iliev, R. Axelrod, Does causality matter more now? Increase in the proportion of causal language in English texts. Psychol. Sci. 27, 635–643 (2016).
4. P. S. Dodds et al., Human language reveals a universal positivity bias. Proc. Natl. Acad. Sci. U.S.A. 112, 2389–2394 (2015).
5. J. C. Jackson et al., From text to thought: How analyzing language can advance psychological science. Perspect. Psychol. Sci., 10.1177/17456916211004899 (2021).
6. J. B. Michel et al.; Google Books Team, Quantitative analysis of culture using millions of digitized books. Science 331, 176–182 (2011).
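The PCA step DeDeo describes can be sketched in a few lines. The year-by-word frequency matrix below is fabricated purely for illustration (the real analysis uses thousands of words from the Google Books corpus); the point is only that PCA over word time series recovers groups of words that rise and fall together.

```python
# Schematic sketch of PCA over word-frequency time series; not the authors' code.
import numpy as np
from sklearn.decomposition import PCA

words = ["iron", "liberty", "computer", "privacy", "determine", "feel"]
years = np.arange(1850, 2020, 10)
trend = (years - years.min()) / (years.max() - years.min())

rng = np.random.default_rng(0)
freq = np.column_stack([
    1 - trend,                          # "iron": archaic, declines
    1 - trend,                          # "liberty": archaic, declines
    trend,                              # "computer": modern, rises
    trend,                              # "privacy": modern, rises
    0.2 + 0.6 * np.sin(np.pi * trend),  # "determine": rises then falls (schematic arc)
    0.8 - 0.6 * np.sin(np.pi * trend),  # "feel": falls then rises
]) + rng.normal(0, 0.02, (len(years), len(words)))

z = (freq - freq.mean(0)) / freq.std(0)     # standardize each word's time series
pca = PCA(n_components=2).fit(z)
for pc, loadings in enumerate(pca.components_, start=1):
    print(f"PC{pc} loadings:", dict(zip(words, loadings.round(2))))
```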

Friday, January 14, 2022

Unlocking adults’ implicit statistical learning by cognitive depletion

Smalle et al. make the fascinating observation that inhibiting the adult cognitive control system with non-invasive brain stimulation can unleash some of our infant-like implicit statistical learning abilities - the learning of novel words embedded in a string of spoken syllables. This suggests that adult language learning is antagonized by higher cognitive mechanisms.

Significance

Statistical learning mechanisms enable extraction of patterns in the environment from infancy to adulthood. For example, they enable segmentation of continuous speech streams into novel words. Adults typically become aware of the hidden words even when passively listening to speech streams. It remains poorly understood how cognitive development and brain maturation affect implicit statistical learning (i.e., infant-like learning without awareness). Here, we show that the depletion of the cognitive control system by noninvasive brain stimulation or by demanding cognitive tasks boosts adults’ implicit but not explicit word-segmentation abilities. These findings suggest that the adult cognitive architecture constrains statistical learning mechanisms that are likely to contribute to early language acquisition and opens avenues to enhance language-learning abilities in adults.
Abstract
Human learning is supported by multiple neural mechanisms that maturate at different rates and interact in mostly cooperative but also sometimes competitive ways. We tested the hypothesis that mature cognitive mechanisms constrain implicit statistical learning mechanisms that contribute to early language acquisition. Specifically, we tested the prediction that depleting cognitive control mechanisms in adults enhances their implicit, auditory word-segmentation abilities. Young adults were exposed to continuous streams of syllables that repeated into hidden novel words while watching a silent film. Afterward, learning was measured in a forced-choice test that contrasted hidden words with nonwords. The participants also had to indicate whether they explicitly recalled the word or not in order to dissociate explicit versus implicit knowledge. We additionally measured electroencephalography during exposure to measure neural entrainment to the repeating words. Engagement of the cognitive mechanisms was manipulated by using two methods. In experiment 1 (n = 36), inhibitory theta-burst stimulation (TBS) was applied to the left dorsolateral prefrontal cortex or to a control region. In experiment 2 (n = 60), participants performed a dual working-memory task that induced high or low levels of cognitive fatigue. In both experiments, cognitive depletion enhanced word recognition, especially when participants reported low confidence in remembering the words (i.e., when their knowledge was implicit). TBS additionally modulated neural entrainment to the words and syllables. These findings suggest that cognitive depletion improves the acquisition of linguistic knowledge in adults by unlocking implicit statistical learning mechanisms and support the hypothesis that adult language learning is antagonized by higher cognitive mechanisms.
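The statistical structure such streams contain can be illustrated in a few lines. The sketch below is not the authors' paradigm (they used forced-choice tests and EEG); it merely shows that syllable-to-syllable transitional probabilities are high inside the hidden words and low at word boundaries, which is the regularity a statistical learner can exploit.

```python
# Toy syllable stream built from made-up "words"; transitional probabilities
# mark word boundaries. Illustration only.
import random
from collections import Counter

words = [("bi", "da", "ku"), ("pa", "do", "ti"), ("go", "la", "tu")]
random.seed(1)
stream = [s for _ in range(300) for s in random.choice(words)]   # continuous stream

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_prob(a, b):
    """P(b | a): how often syllable a is followed by syllable b."""
    return pair_counts[(a, b)] / first_counts[a]

print("within word :", transitional_prob("bi", "da"))   # ~1.0
print("across words:", transitional_prob("ku", "pa"))   # ~1/3
```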

Friday, December 31, 2021

The rise and fall of rationality in language

Fascinating language analysis from Scheffer et al. that illustrates a marked shift over the past decades in public interest from the collective to the individual, and from rationality toward emotion:

Significance

The post-truth era has taken many by surprise. Here, we use massive language analysis to demonstrate that the rise of fact-free argumentation may perhaps be understood as part of a deeper change. After the year 1850, the use of sentiment-laden words in Google Books declined systematically, while the use of words associated with fact-based argumentation rose steadily. This pattern reversed in the 1980s, and this change accelerated around 2007, when across languages, the frequency of fact-related words dropped while emotion-laden language surged, a trend paralleled by a shift from collectivistic to individualistic language.
Abstract
The surge of post-truth political argumentation suggests that we are living in a special historical period when it comes to the balance between emotion and reasoning. To explore if this is indeed the case, we analyze language in millions of books covering the period from 1850 to 2019 represented in Google nGram data. We show that the use of words associated with rationality, such as “determine” and “conclusion,” rose systematically after 1850, while words related to human experience such as “feel” and “believe” declined. This pattern reversed over the past decades, paralleled by a shift from a collectivistic to an individualistic focus as reflected, among other things, by the ratio of singular to plural pronouns such as “I”/”we” and “he”/”they.” Interpreting this synchronous sea change in book language remains challenging. However, as we show, the nature of this reversal occurs in fiction as well as nonfiction. Moreover, the pattern of change in the ratio between sentiment and rationality flag words since 1850 also occurs in New York Times articles, suggesting that it is not an artifact of the book corpora we analyzed. Finally, we show that word trends in books parallel trends in corresponding Google search terms, supporting the idea that changes in book language do in part reflect changes in interest. All in all, our results suggest that over the past decades, there has been a marked shift in public interest from the collective to the individual, and from rationality toward emotion.
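The core measure reduces to a ratio of word frequencies over time. Here is a hedged sketch with tiny flag-word lists and invented counts; the real analysis uses large curated word lists and Google nGram frequencies across several languages.

```python
# Sketch of the rationality/intuition flag-word ratio; all numbers are invented
# to mirror the shape (not the values) of the reported trend.
rationality_words = {"determine", "conclusion"}
intuition_words = {"feel", "believe"}

# year -> {word: relative frequency} (fabricated)
ngram = {
    1850: {"determine": 2.5, "conclusion": 2.5, "feel": 2.5, "believe": 2.5},
    1980: {"determine": 6.0, "conclusion": 3.0, "feel": 1.5, "believe": 1.5},
    2019: {"determine": 2.5, "conclusion": 2.5, "feel": 2.5, "believe": 2.5},
}

def flag_ratio(freqs, numerator, denominator):
    num = sum(freqs.get(w, 0.0) for w in numerator)
    den = sum(freqs.get(w, 0.0) for w in denominator)
    return num / den

for year, freqs in ngram.items():
    print(year, round(flag_ratio(freqs, rationality_words, intuition_words), 2))
# Rises from ~1:1 to ~3:1 by 1980, then falls back toward ~1:1 by 2019.
```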

Monday, November 29, 2021

An artificial neural network that responds to written words like our brain's word form area

Interesting work from Dehaene and collaborators:  

Significance

Learning to read results in the formation of a specialized region in the human ventral visual cortex. This region, the visual word form area (VWFA), responds selectively to written words more than to other visual stimuli. However, how neural circuits at this site implement an invariant recognition of written words remains unknown. Here, we show how an artificial neural network initially designed for object recognition can be retrained to recognize words. Once literate, the network develops a sparse neuronal representation of words that replicates several known aspects of the cognitive neuroscience of reading and leads to precise predictions concerning how a small set of neurons implement the orthographic stage of reading acquisition using a compositional neural code.
Abstract
The visual word form area (VWFA) is a region of human inferotemporal cortex that emerges at a fixed location in the occipitotemporal cortex during reading acquisition and systematically responds to written words in literate individuals. According to the neuronal recycling hypothesis, this region arises through the repurposing, for letter recognition, of a subpart of the ventral visual pathway initially involved in face and object recognition. Furthermore, according to the biased connectivity hypothesis, its reproducible localization is due to preexisting connections from this subregion to areas involved in spoken-language processing. Here, we evaluate those hypotheses in an explicit computational model. We trained a deep convolutional neural network of the ventral visual pathway, first to categorize pictures and then to recognize written words invariantly for case, font, and size. We show that the model can account for many properties of the VWFA, particularly when a subset of units possesses a biased connectivity to word output units. The network develops a sparse, invariant representation of written words, based on a restricted set of reading-selective units. Their activation mimics several properties of the VWFA, and their lesioning causes a reading-specific deficit. The model predicts that, in literate brains, written words are encoded by a compositional neural code with neurons tuned either to individual letters and their ordinal position relative to word start or word ending or to pairs of letters (bigrams).
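As a rough sketch of the recycling idea, under my own simplifying assumptions (a generic torchvision ResNet standing in for the authors' ventral-stream model, random tensors standing in for rendered word images), one can take a CNN architecture built for object recognition and retrain its output layer to classify written words:

```python
# Minimal sketch, not the authors' network: swap the object-classification head
# of a CNN for a "word output" layer and take one retraining step on toy data.
import torch
import torch.nn as nn
from torchvision.models import resnet18

n_words = 1000                                   # size of the toy lexicon
cnn = resnet18()                                 # stands in for an object-trained network
cnn.fc = nn.Linear(cnn.fc.in_features, n_words)  # new word-output layer

optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)             # stand-ins for word images (any case/font/size)
labels = torch.randint(0, n_words, (8,))         # which word each image shows

cnn.train()
loss = loss_fn(cnn(images), labels)
loss.backward()
optimizer.step()
print("one retraining step, loss:", float(loss))
```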

Tuesday, November 23, 2021

Socrates, Diderot, and Wolpert on Writing and Printing

I have to pass on these quotes sent by one of my Chaos and Complexity Seminar colleagues at the University of Wisconsin:
Socrates on writing, from Phaedrus, 275a-b
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
Denis Diderot, Encyclopédie, 1755
"As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes."
Lewis Wolpert (1929–2021), from his Web of Stories interview (Lewis Wolpert - Scientist - Web of Stories)
"Reading rots the mind."

Monday, November 15, 2021

Coevolution of tool use and language - shared syntactic processes and basal ganglia substrates

Thibault et al. show that tool use and language share syntactic processes. Functional magnetic resonance imaging reveals that tool use and syntax in language elicit similar patterns of brain activation within the basal ganglia. This indicates common neural resources for the two abilities. Indeed, learning transfer occurs across the two domains: Tool-use motor training improves syntactic processing in language and, reciprocally, linguistic training with syntactic structures improves tool use. Here is their entire structured abstract:   

INTRODUCTION

Tool use is a hallmark of human evolution. Beyond its sensorimotor components, the complexity of which has been extensively investigated, tool use affects cognition from a different perspective. Indeed, tool use requires integrating an external object as a body part and embedding its functional structure in the motor program. This adds a hierarchical level into the motor plan of manual actions, subtly modifying the relationship between interdependent subcomponents. Embedded structures also exist in language, and syntax is the cognitive function handling these linguistic hierarchies. One example is center-embedded object-relative clauses: “The poet [that the scientist admires] reads the paper.” Accordingly, researchers have advanced a role for syntax in action and the existence of similarities between the processes underlying tool use and language, so that shared neural resources for a common cognitive function could be at stake.
RATIONALE
We first tested the existence of shared neural substrates for tool use and syntax in language. Second, we tested the prediction that training one ability should affect performance in the other. In a first experiment, we measured participants’ brain activity with functional magnetic resonance imaging during tool use or, as a control, manual actions. In separate runs, the same participants performed a linguistic task on complex syntactic structures. We looked for common activations between tool use and the linguistic task, predicting similar patterns of activity if they rely on common neural resources. In further behavioral experiments, we tested whether motor training with the tool selectively improves syntactic performance in language and if syntactic training in language, in turn, selectively improves motor performance with the tool.
RESULTS
Tool-use planning and complex syntax processing (i.e., object relatives) elicited neural activity anatomically colocalized within the basal ganglia. A control experiment ruled out verbal working memory and manual (i.e., without a tool) control processes as an underlying component of this overlap. Multivariate analyses revealed similar spatial distributions of neural patterns prompted by tool-use planning and object-relative processing. This agrees with the recruitment of the same neural resources by both abilities and with the existence of a supramodal syntactic function. The shared neurofunctional resources were moreover reflected behaviorally by cross-domain learning transfer. Indeed, tool-use training significantly improved linguistic performance with complex syntactic structures. No learning transfer was observed on language syntactic abilities if participants trained without the tool. The reverse was also true: Syntactic training with complex sentences improved motor performance with the tool more than motor performance in a task without the tool and matched for sensorimotor difficulty. No learning transfer was observed on tool use if participants trained with simpler syntactic structures in language.
CONCLUSION
These findings reveal the existence of a supramodal syntactic function that is shared between language and motor processes. As a consequence, training tool-use abilities improves linguistic syntax and, reciprocally, training linguistic syntax abilities improves tool use. The neural mechanisms allowing for boosting performance in one domain by training syntax in the other may involve priming processes through preactivation of common neural resources, as well as short-term plasticity within the shared network. Our findings point to the basal ganglia as the neural site of supramodal syntax that handles embedded structures in either domain and also support longstanding theories of the coevolution of tool use and language in humans.

Wednesday, November 10, 2021

Computational evidence that predictive processing shapes language comprehension mechanisms in the brain.

Having just posted a lecture on predictive processing that I gave two days ago, I came across this fascinating work from Schrimpf et al.:

Significance

Language is a quintessentially human ability. Research has long probed the functional architecture of language in the mind and brain using diverse neuroimaging, behavioral, and computational modeling approaches. However, adequate neurally-mechanistic accounts of how meaning might be extracted from language are sorely lacking. Here, we report a first step toward addressing this gap by connecting recent artificial neural networks from machine learning to human recordings during language processing. We find that the most powerful models predict neural and behavioral responses across different datasets up to noise levels. Models that perform better at predicting the next word in a sequence also better predict brain measurements—providing computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the brain.
Abstract
The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species’ signature cognitive skill. We find that the most powerful “transformer” models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models’ neural fits (“brain score”) and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
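The "brain score" idea can be sketched schematically: fit a cross-validated linear map from model activations to measured responses and score the correlation between predicted and held-out responses. The code below is not the authors' benchmark pipeline and uses random stand-in data for sentence-level activations and recordings.

```python
# Schematic brain-score computation on fabricated data; illustration only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, n_model_units, n_voxels = 200, 50, 10

activations = rng.normal(size=(n_sentences, n_model_units))    # model features per sentence
true_map = rng.normal(size=(n_model_units, n_voxels))
neural = activations @ true_map + rng.normal(scale=5.0, size=(n_sentences, n_voxels))

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(activations):
    model = Ridge(alpha=1.0).fit(activations[train], neural[train])
    pred = model.predict(activations[test])
    # correlate prediction and measurement per voxel, then average
    r = [np.corrcoef(pred[:, v], neural[test, v])[0, 1] for v in range(n_voxels)]
    scores.append(np.mean(r))

print("mean cross-validated brain score (toy data):", round(float(np.mean(scores)), 2))
```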

Friday, September 03, 2021

Babbling bats

An interesting piece from Fernandez et al.:
Babbling is a production milestone in infant speech development. Evidence for babbling in nonhuman mammals is scarce, which has prevented cross-species comparisons. In this study, we investigated the conspicuous babbling behavior of Saccopteryx bilineata, a bat capable of vocal production learning. We analyzed the babbling of 20 bat pups in the field during their 3-month ontogeny and compared its features to those that characterize babbling in human infants. Our findings demonstrate that babbling in bat pups is characterized by the same eight features as babbling in human infants, including the conspicuous features reduplication and rhythmicity. These parallels in vocal ontogeny between two mammalian species offer future possibilities for comparison of cognitive and neuromolecular mechanisms and adaptive functions of babbling in bats and humans.

Thursday, June 10, 2021

Storytelling increases oxytocin and positive emotions, decreases cortisol and pain, in hospitalized kids

From Brockington et al.:
Storytelling is a distinctive human characteristic that may have played a fundamental role in humans’ ability to bond and navigate challenging social settings throughout our evolution. However, the potential impact of storytelling on regulating physiological and psychological functions has received little attention. We investigated whether listening to narratives from a storyteller can provide beneficial effects for children admitted to intensive care units. Biomarkers (oxytocin and cortisol), pain scores, and psycholinguistic associations were collected immediately before and after storytelling and an active control intervention (solving riddles that also involved social interaction but lacked the immersive narrative aspect). Compared with the control group, children in the storytelling group showed a marked increase in oxytocin combined with a decrease in cortisol in saliva after the 30-min intervention. They also reported less pain and used more positive lexical markers when describing their time in hospital. Our findings provide a psychophysiological basis for the short-term benefits of storytelling and suggest that a simple and inexpensive intervention may help alleviate the physical and psychological pain of hospitalized children on the day of the intervention.

Friday, April 16, 2021

Vision: What’s so special about words?

Readers are sensitive to the statistics of written language. New work by Vidal et al. suggests that this sensitivity may be driven by the same domain-general mechanisms that enable the visual system to detect statistical regularities in the visual environment. 

Highlights

• Readers presented with orthographic-like stimuli are sensitive to bigram frequencies 
• An analogous effect emerges with images of made-up objects and visual gratings 
• These data suggest that the reading system might rely on general-purpose mechanisms 
• This calls for considering reading in the broader context of visual neuroscience
Summary
As writing systems are a relatively novel invention (slightly over 5 kya), they could not have influenced the evolution of our species. Instead, reading might recycle evolutionary older mechanisms that originally supported other tasks and preceded the emergence of written language. Accordingly, it has been shown that baboons and pigeons can be trained to distinguish words from nonwords based on orthographic regularities in letter co-occurrence. This suggests that part of what is usually considered reading-specific processing could be performed by domain-general visual mechanisms. Here, we tested this hypothesis in humans: if the reading system relies on domain-general visual mechanisms, some of the effects that are often found with orthographic material should also be observable with non-orthographic visual stimuli. We performed three experiments using the same exact design but with visual stimuli that progressively departed from orthographic material. Subjects were passively familiarized with a set of composite visual items and tested in an oddball paradigm for their ability to detect novel stimuli. Participants showed robust sensitivity to the co-occurrence of features (“bigram” coding) with strings of letter-like symbols but also with made-up 3D objects and sinusoidal gratings. This suggests that the processing mechanisms involved in the visual recognition of novel words also support the recognition of other novel visual objects. These mechanisms would allow the visual system to capture statistical regularities in the visual environment. We hope that this work will inspire models of reading that, although addressing its unique aspects, place it within the broader context of vision.
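The bigram-statistics idea is easy to illustrate. The sketch below is my own toy example, not the authors' materials (their stimuli were letter-like symbols, made-up 3D objects, and gratings): test strings are scored by the bigram frequencies of a familiarization set, so items built from unattested bigrams stand out as oddballs.

```python
# Toy bigram ("co-occurrence of features") scoring; illustration only.
from collections import Counter

familiarization = ["BAKO", "KODU", "DUBA", "BAKO", "KODU"]   # made-up items
bigram_counts = Counter(
    pair for item in familiarization for pair in zip(item, item[1:])
)
total = sum(bigram_counts.values())

def mean_bigram_freq(item):
    pairs = list(zip(item, item[1:]))
    return sum(bigram_counts[p] for p in pairs) / (total * len(pairs))

print("familiar-like item:", round(mean_bigram_freq("BAKODU"), 3))  # high: attested bigrams
print("oddball item      :", round(mean_bigram_freq("OKABUD"), 3))  # low: unattested bigrams
```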

Monday, March 02, 2020

Speech versus music in the brain

Peter Stern summarizes the work of Albouy et al. in the current issue of Science Magazine:
To what extent does the perception of speech and music depend on different mechanisms in the human brain? What is the anatomical basis underlying this specialization? Albouy et al. created a corpus of a cappella songs that contain both speech (semantic) and music (melodic) information and degraded each stimulus selectively in either the temporal or spectral domain. Degradation of temporal information impaired speech recognition but not melody recognition, whereas degradation of spectral information impaired melody recognition but not speech recognition. Brain scanning revealed a right-left asymmetry for speech and music. Classification of speech content occurred exclusively in the left auditory cortex, whereas classification of melodic content occurred only in the right auditory cortex.
And here is the Albouy et al. abstract:
Does brain asymmetry for speech and music emerge from acoustical cues or from domain-specific neural networks? We selectively filtered temporal or spectral modulations in sung speech stimuli for which verbal and melodic content was crossed and balanced. Perception of speech decreased only with degradation of temporal information, whereas perception of melodies decreased only with spectral degradation. Functional magnetic resonance imaging data showed that the neural decoding of speech and melodies depends on activity patterns in left and right auditory regions, respectively. This asymmetry is supported by specific sensitivity to spectrotemporal modulation rates within each region. Finally, the effects of degradation on perception were paralleled by their effects on neural classification. Our results suggest a match between acoustical properties of communicative signals and neural specializations adapted to that purpose.
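A rough sketch of the degradation idea, under simplified assumptions that are not the authors' filtering code: smooth a magnitude spectrogram along the time axis (blurring temporal modulations, which hurts speech) or along the frequency axis (blurring spectral modulations, which hurts melody), then resynthesize.

```python
# Simplified spectrotemporal degradation via spectrogram smoothing; sketch only.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def degrade(signal, fs, axis, smoothing):
    """axis=1 smooths across time frames (temporal degradation);
    axis=0 smooths across frequency bins (spectral degradation)."""
    f, t, Z = stft(signal, fs=fs, nperseg=512)
    magnitude = uniform_filter1d(np.abs(Z), size=smoothing, axis=axis)
    Z_degraded = magnitude * np.exp(1j * np.angle(Z))   # keep the original phase
    _, out = istft(Z_degraded, fs=fs, nperseg=512)
    return out

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)     # 1-s stand-in for a sung stimulus
temporally_degraded = degrade(tone, fs, axis=1, smoothing=20)
spectrally_degraded = degrade(tone, fs, axis=0, smoothing=20)
print(temporally_degraded.shape, spectrally_degraded.shape)
```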

Wednesday, December 18, 2019

Favorite sentences

I have to pass on a few of the favorite sentences collected by book critic Dwight Garner from his 2019 list of books - you can get the citations from the article.
“Watch for the glamorous sentence that appears from nowhere — it might have plans for you.”
“If you don’t know the exact moment when the lights will go out, you might as well read until they do.”
“The one good thing about national anthems is that we’re already on our feet, and therefore ready to run.”
“How’re you doing,” a character asked in Ali Smith’s novel “Spring,” “apart from the end of liberal capitalist democracy?”
“Take a simpleton and give him power and confront him with intelligence — and you have a tyrant.”
In Robert Menasse’s sophisticated novel “The Capital,” set in Brussels, a character watched old nationalist ghosts rise in a tabloid culture, and commented: “He had been prepared for everything, but not everything in caricature.”...also from this novel...“Back in 1914, his grandfather had said, Brussels was the richest and most beautiful city in the world — then they came three times, twice in their boots with rifles, the third time in their trainers with cameras.”
Or, to let it all pass by...
Nabokov told an interviewer in 1974, “I don’t even know who Mr. Watergate is.”
And, I will add a favorite sentence of my own, from Pinker's guide to writing, “The Sense of Style”:
"The key to good style, far more than obeying any list of commandments, is to have a clear conception of the make-believe world in which you’re pretending to communicate."

Monday, November 18, 2019

Social class is revealed by brief clips of speech.

Kraus et al. - a collective modern version of Professor Henry Higgins in George Bernard Shaw's play Pygmalion - offer a detailed analytic update on how social class is reproduced through subtle cues expressed in brief speech. Here is their abstract:
Economic inequality is at its highest point on record and is linked to poorer health and well-being across countries. The forces that perpetuate inequality continue to be studied, and here we examine how a person’s position within the economic hierarchy, their social class, is accurately perceived and reproduced by mundane patterns embedded in brief speech. Studies 1 through 4 examined the extent that people accurately perceive social class based on brief speech patterns. We find that brief speech spoken out of context is sufficient to allow respondents to discern the social class of speakers at levels above chance accuracy, that adherence to both digital and subjective standards for English is associated with higher perceived and actual social class of speakers, and that pronunciation cues in speech communicate social class over and above speech content. In study 5, we find that people with prior hiring experience use speech patterns in preinterview conversations to judge the fit, competence, starting salary, and signing bonus of prospective job candidates in ways that bias the process in favor of applicants of higher social class. Overall, this research provides evidence for the stratification of common speech and its role in both shaping perceiver judgments and perpetuating inequality during the briefest interactions.
Here is a sample explanatory clip from their results section:
A total of 229 perceivers were asked to listen to the speech of 27 unique speakers whose utterances were collected as part of a larger sample of 189 speakers through the International Dialects of English Archive (IDEA). These 27 speakers varied in terms of age, race, gender, and social class, which we measured in the present study in terms of high school or college degree attainment. Our sample of perceivers listened to 7 words spoken by each of the speakers presented consecutively and randomly without any other accompanying speech and answered “Yes” or “No” to 4 questions: “Is this person a college graduate/woman/young/white?” Participants answered these 4 questions in a randomized order, and we calculated the proportion of correct responses for each question...
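That scoring step amounts to simple proportions against a 50% chance level; here is a toy version with invented responses (not the study's data).

```python
# Toy accuracy computation for the four yes/no judgments; all responses invented.
questions = ["college graduate", "woman", "young", "white"]

truth = {"college graduate": True, "woman": False, "young": True, "white": True}
responses = [
    {"college graduate": True,  "woman": False, "young": True,  "white": True},
    {"college graduate": True,  "woman": True,  "young": False, "white": True},
    {"college graduate": False, "woman": False, "young": True,  "white": True},
]

for q in questions:
    correct = sum(r[q] == truth[q] for r in responses)
    print(f"{q:16s} accuracy {correct / len(responses):.2f} (chance = 0.50)")
```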

Friday, September 20, 2019

Gender neutral pronouns reduce bias in favor of traditional gender roles

An interesting study by Tavits and Pérez on the 2015 incorporation of the gender-neutral pronoun hen into the Swedish Academy Glossary, which sets norms for Sweden's language. A majority of Swedes now use hen alongside the explicitly gendered hon (she) and han (he) as part of their grammatical toolkit.

Significance
Evidence from 3 survey experiments traces the effects of gender-neutral pronoun use on mass judgments of gender equality and tolerance toward lesbian, gay, bisexual, and transgender (LGBT) communities. The results establish that individual use of gender-neutral pronouns reduces the mental salience of males. This shift is associated with people expressing less bias in favor of traditional gender roles and categories, as manifested in more positive attitudes toward women and LGBT individuals in public affairs.
Abstract
To improve gender equality and tolerance toward lesbian, gay, bisexual, and transgender (LGBT) communities, several nations have promoted the use of gender-neutral pronouns and words. Do these linguistic devices actually reduce biases that favor men over women, gays, lesbians, and transgender individuals? The current article explores this question with 3 large-scale experiments in Sweden, which formally incorporated a gender-neutral pronoun into its language alongside established gendered pronouns equivalent to he and she. The evidence shows that compared with masculine pronouns, use of gender-neutral pronouns decreases the mental salience of males. This shift is associated with individuals expressing less bias in favor of traditional gender roles and categories, as reflected in more favorable attitudes toward women and LGBT individuals in public life. Additional analyses reveal similar patterns for feminine pronouns. The influence of both pronouns is more automatic than controlled.

Monday, June 17, 2019

Why can we read only one word at a time?

Fascinating work from White et al.:

Significance
Because your brain has limited processing capacity, you cannot comprehend the text on this page all at once. In fact, skilled readers cannot even recognize just two words at once. We measured how the visual areas of the brain respond to pairs of words while participants attended to one word or tried to divide attention between both. We discovered that a single word-selective region in left ventral occipitotemporal cortex processes both words in parallel. The parallel streams of information then converge at a bottleneck in an adjacent, more anterior word-selective region. This result reveals the functional significance of subdivisions within the brain’s reading circuitry and offers a compelling explanation for a profound limit on human perception.
Abstract
In most environments, the visual system is confronted with many relevant objects simultaneously. That is especially true during reading. However, behavioral data demonstrate that a serial bottleneck prevents recognition of more than one word at a time. We used fMRI to investigate how parallel spatial channels of visual processing converge into a serial bottleneck for word recognition. Participants viewed pairs of words presented simultaneously. We found that retinotopic cortex processed the two words in parallel spatial channels, one in each contralateral hemisphere. Responses were higher for attended than for ignored words but were not reduced when attention was divided. We then analyzed two word-selective regions along the occipitotemporal sulcus (OTS) of both hemispheres (subregions of the visual word form area, VWFA). Unlike retinotopic regions, each word-selective region responded to words on both sides of fixation. Nonetheless, a single region in the left hemisphere (posterior OTS) contained spatial channels for both hemifields that were independently modulated by selective attention. Thus, the left posterior VWFA supports parallel processing of multiple words. In contrast, activity in a more anterior word-selective region in the left hemisphere (mid OTS) was consistent with a single channel, showing (i) limited spatial selectivity, (ii) no effect of spatial attention on mean response amplitudes, and (iii) sensitivity to lexical properties of only one attended word. Therefore, the visual system can process two words in parallel up to a late stage in the ventral stream. The transition to a single channel is consistent with the observed bottleneck in behavior.

Monday, December 10, 2018

The coding of perception in language is not universal.

From Majid et al.:
Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.

Tuesday, October 09, 2018

Sans Forgetica

A fascinating piece from Taylor Telford in The Washington Post describes a new font devised by psychology and design researchers at RMIT Univ. in Melbourne...
...designed to boost information retention for readers. It’s based on a theory called “desirable difficulty,” which suggests that people remember things better when their brains have to overcome minor obstacles while processing information. Sans Forgetica is sleek and back-slanted with intermittent gaps in each letter, which serve as a “simple puzzle” for the reader...The back-slanting in Sans Forgetica would be foreign to most readers...The openings in the letters make the brain pause to identify the shapes.
It may be my imagination, but I feel my brain perking up, working harder, to take in this graphic:


The team tested the font’s efficacy along with other intentionally complicated fonts on 400 students in lab and online experiments and found that “Sans Forgetica broke just enough design principles without becoming too illegible and aided memory retention.”

Monday, July 30, 2018

Piano training enhances speech perception.

Fascinating work from an international collaboration of Desimone at M.I.T., Nan at Beijing Normal Univ., and others:

Significance
Musical training is beneficial to speech processing, but this transfer’s underlying brain mechanisms are unclear. Using pseudorandomized group assignments with 74 4- to 5-year-old Mandarin-speaking children, we showed that, relative to an active control group which underwent reading training and a no-contact control group, piano training uniquely enhanced cortical responses to pitch changes in music and speech (as lexical tones). These neural enhancements further generalized to early literacy skills: Compared with the controls, the piano-training group also improved behaviorally in auditory word discrimination, which was correlated with their enhanced neural sensitivities to musical pitch changes. Piano training thus improves children’s common sound processing, facilitating certain aspects of language development as much as, if not more than, reading instruction.
Abstract
Musical training confers advantages in speech-sound processing, which could play an important role in early childhood education. To understand the mechanisms of this effect, we used event-related potential and behavioral measures in a longitudinal design. Seventy-four Mandarin-speaking children aged 4–5 y old were pseudorandomly assigned to piano training, reading training, or a no-contact control group. Six months of piano training improved behavioral auditory word discrimination in general as well as word discrimination based on vowels compared with the controls. The reading group yielded similar trends. However, the piano group demonstrated unique advantages over the reading and control groups in consonant-based word discrimination and in enhanced positive mismatch responses (pMMRs) to lexical tone and musical pitch changes. The improved word discrimination based on consonants correlated with the enhancements in musical pitch pMMRs among the children in the piano group. In contrast, all three groups improved equally on general cognitive measures, including tests of IQ, working memory, and attention. The results suggest strengthened common sound processing across domains as an important mechanism underlying the benefits of musical training on language processing. In addition, although we failed to find far-transfer effects of musical training to general cognition, the near-transfer effects to speech perception establish the potential for musical training to help children improve their language skills. Piano training was not inferior to reading training on direct tests of language function, and it even seemed superior to reading training in enhancing consonant discrimination.