Hutson points to the work of Garg et al., which uses artificial intelligence to demonstrate how racial and gender stereotypes have changed over time.
They designed their program to use word embeddings, strings of numbers that represent a word’s meaning based on its appearance next to other words in large bodies of text. If people tend to describe women as emotional, for example, “emotional” will appear alongside “woman” more frequently than “man,” and word embeddings will pick that up... Going decade by decade, they found that words related to competence—such as “resourceful” and “clever”—were slowly becoming less masculine. But words related to physical appearance—such as “alluring” and “homely”—were stuck in time... their embeddings were still distinctly “female.”... Asian names became less tightly linked to terms for outsiders... words related to terrorism became more closely associated with words related to Islam...
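The co-occurrence intuition is easy to check against any pretrained embedding. Here is a minimal sketch, mine rather than the authors' code, that loads GloVe vectors through gensim's downloader (the "glove-wiki-gigaword-100" model is an assumption; any pretrained embedding would do) and compares how strongly a few of the adjectives above associate with "woman" versus "man":

```python
# Sketch only, not Garg et al.'s code: compare cosine similarity of an
# adjective to "woman" versus "man" in a pretrained embedding space.
import gensim.downloader as api

# Assumed model choice; downloads on first use.
vectors = api.load("glove-wiki-gigaword-100")

for adjective in ["emotional", "resourceful", "clever", "alluring"]:
    to_woman = vectors.similarity(adjective, "woman")
    to_man = vectors.similarity(adjective, "man")
    # Positive gap: the adjective sits closer to "woman" in embedding space.
    gap = to_woman - to_man
    print(f"{adjective:12s} woman={to_woman:.3f} man={to_man:.3f} gap={gap:+.3f}")
```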
Here is the Garg et al. abstract:
Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts—e.g., the women’s movement in the 1960s and Asian immigration into the United States—and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science.
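As a rough sketch of how such an association can be quantified, in the spirit of the paper's relative-norm-distance idea, one can average the vectors for words representing each group and ask which group average a neutral word lies closer to. The snippet below assumes the same gensim GloVe vectors as above; the word lists are illustrative choices of my own:

```python
# Sketch of a relative-norm-distance style bias score (my paraphrase of
# the idea, not the authors' code): distance of a neutral word to the
# female group average minus its distance to the male group average.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # assumed model choice

def group_mean(words):
    """L2-normalize each word vector, average, then renormalize."""
    vecs = np.array([vectors[w] / np.linalg.norm(vectors[w]) for w in words])
    mean = vecs.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Illustrative group word lists, not the paper's.
female = group_mean(["she", "her", "woman", "female"])
male = group_mean(["he", "his", "man", "male"])

def bias(word):
    v = vectors[word] / np.linalg.norm(vectors[word])
    # Negative: closer to the female average; positive: closer to the male.
    return np.linalg.norm(v - female) - np.linalg.norm(v - male)

for w in ["nurse", "engineer", "homemaker", "clever"]:
    print(f"{w:10s} bias={bias(w):+.3f}")
```

Computing the same score over embeddings trained on each decade's text is what lets the framework track how these associations shift over time.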