Showing posts with label language.

Friday, December 15, 2017

Teaching A.I. to explain itself

An awkward feature of the artificial intelligence, or machine learning, algorithms that teach themselves to translate languages, analyze X-ray images and mortgage loans, judge the probability of behaviors from faces, etc., is that we are unable to discern exactly what they are doing as they perform these functions. How can we trust these machines unless they can explain themselves? This issue is the subject of an interesting piece by Cliff Kuang. A few clips from the article:
Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand.
A decade in the making, the European Union’s General Data Protection Regulation finally goes into effect in May 2018. It’s a sprawling, many-tentacled piece of legislation whose opening lines declare that the protection of personal data is a universal human right. Among its hundreds of provisions, two seem aimed squarely at where machine learning has already been deployed and how it’s likely to evolve. Google and Facebook are most directly threatened by Article 21, which affords anyone the right to opt out of personally tailored ads. The next article then confronts machine learning head on, limning a so-called right to explanation: E.U. citizens can contest “legal or similarly significant” decisions made by algorithms and appeal for human intervention. Taken together, Articles 21 and 22 introduce the principle that people are owed agency and understanding when they’re faced by machine-made decisions.
To create a neural net that can reveal its inner workings...researchers...are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks. Others involve psychological insight: One team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision. (The idea is partly inspired by psychological studies of real-life experts like firefighters, who don’t clock in for a shift thinking, These are the 12 rules for fighting fires; when they see a fire before them, they compare it with ones they’ve seen before and act accordingly.) Perhaps the most ambitious of the dozen different projects are those that seek to bolt new explanatory capabilities onto existing deep neural networks. Imagine giving your pet dog the power of speech, so that it might finally explain what’s so interesting about squirrels. Or, as Trevor Darrell, a lead investigator on one of those teams, sums it up, “The solution to explainable A.I. is more A.I.”
... a novel idea for letting an A.I. teach itself how to describe the contents of a picture...create two deep neural networks: one dedicated to image recognition and another to translating languages. ...they lashed these two together and fed them thousands of images that had captions attached to them. As the first network learned to recognize the objects in a picture, the second simply watched what was happening in the first, then learned to associate certain words with the activity it saw. Working together, the two networks could identify the features of each picture, then label them. Soon after, Darrell was presenting some different work to a group of computer scientists when someone in the audience raised a hand, complaining that the techniques he was describing would never be explainable. Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.
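To make Darrell's "lash two networks together" recipe a bit more concrete, here is a deliberately tiny sketch of the idea (my own toy illustration, not the Berkeley group's system; the layer sizes, 10 classes, and 50-word "vocabulary" are invented): one network does the task, and a second network is trained only on the first network's internal activity, learning to emit words about what it sees there.

```python
# Toy sketch only: a "task" network and a second "explainer" network that watches
# the first network's hidden activity and learns to emit descriptive words about it.
# Layer sizes, the 10 classes, and the 50-word vocabulary are all invented.
import torch
import torch.nn as nn

class TaskNet(nn.Module):
    """Does the job (here, toy image classification) and exposes its hidden features."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        h = self.features(x)              # internal activity the explainer will watch
        return self.classifier(h), h

class ExplainerNet(nn.Module):
    """Watches TaskNet's hidden activity and predicts words describing it."""
    def __init__(self, vocab_size=50):
        super().__init__()
        self.decoder = nn.Linear(16, vocab_size)

    def forward(self, h):
        return self.decoder(h)            # logits over a small descriptive vocabulary

task_net, explainer = TaskNet(), ExplainerNet()
images = torch.randn(4, 3, 32, 32)        # a fake batch of images
class_logits, hidden = task_net(images)   # first network does the task
word_logits = explainer(hidden.detach())  # second network describes what it saw
print(class_logits.shape, word_logits.shape)
```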

Friday, November 17, 2017

The emotional intelligence of one- to four-year-olds

Interesting work from Wu et al. showing that young children connect diverse positive emotional vocalizations to their probable causes, demonstrating more sophisticated emotion understanding than previously realized:
The ability to understand why others feel the way they do is critical to human relationships. Here, we show that emotion understanding in early childhood is more sophisticated than previously believed, extending well beyond the ability to distinguish basic emotions or draw different inferences from positively and negatively valenced emotions. In a forced-choice task, 2- to 4-year-olds successfully identified probable causes of five distinct positive emotional vocalizations elicited by what adults would consider funny, delicious, exciting, sympathetic, and adorable stimuli (Experiment 1). Similar results were obtained in a preferential looking paradigm with 12- to 23-month-olds, a direct replication with 18- to 23-month-olds (Experiment 2), and a simplified design with 12- to 17-month-olds (Experiment 3). Moreover, 12- to 17-month-olds selectively explored, given improbable causes of different positive emotional reactions (Experiments 4 and 5). The results suggest that by the second year of life, children make sophisticated and subtle distinctions among a wide range of positive emotions and reason about the probable causes of others’ emotional reactions. These abilities may play a critical role in developing theory of mind, social cognition, and early relationships.

Tuesday, November 14, 2017

How linguistic metaphor scaffolds reasoning

Continuing the line of inquiry pioneered by Lakoff and Johnson's 1980 book, "Metaphors We Live By", Thibodeau et al. provide further examples of how the use of metaphor shapes our thoughts. I'm passing on their summary.

Abstract
Language helps people communicate and think. Precise and accurate language would seem best suited to achieve these goals. But a close look at the way people actually talk reveals an abundance of apparent imprecision in the form of metaphor: ideas are ‘light bulbs’, crime is a ‘virus’, and cancer is an ‘enemy’ in a ‘war’. In this article, we review recent evidence that metaphoric language can facilitate communication and shape thinking even though it is literally false. We first discuss recent experiments showing that linguistic metaphor can guide thought and behavior. Then we explore the conditions under which metaphors are most influential. Throughout, we highlight theoretical and practical implications, as well as key challenges and opportunities for future research.
Trends
Metaphors pervade discussions of abstract concepts and complex issues: ideas are ‘light bulbs’, crime is a ‘virus’, and cancer is an ‘enemy’ in a ‘war’.
At a process level, metaphors, like analogies, involve structure mapping, in which relational structure from the source domain is leveraged for thinking about the target domain.
Metaphors influence how people think about the topics they describe by shaping how people attend to, remember, and process information.
The effects of metaphor on reasoning are not simply the result of lexical priming.
Metaphors can covertly influence how people think. That is, people are not always aware that they have been influenced by a metaphor.

Wednesday, September 27, 2017

“No problem” vs “you’re welcome”

My daughter pointed me to this piece by Gretchen McCulloch, which gives me some insight into what I have considered the annoying habit younger people have of always saying 'no problem' instead of 'you're welcome.' A clip:
Speaking of linguistics, there’s one particular linguistic tic that I think clearly separates Baby Boomers from Millennials: how we reply when someone says “thank you.”
You almost never hear a Millennial say “you’re welcome.” At least not when someone thanks them. It just isn’t done. Not because Millennials are ingrates lacking all manners, but because the polite response is “No problem.” Millennials only use “you’re welcome” sarcastically when they haven’t been thanked or when something has been taken from/done to them without their consent. It’s a phrase that’s used to point out someone else’s rudeness. A Millennial would typically be fairly uncomfortable saying “you’re welcome” as an acknowledgement of genuine thanks because the phrase is only ever used disingenuously.
Baby Boomers, however, get really miffed if someone says “no problem” in response to being thanked. From their perspective, saying “no problem” means that whatever they’re thanking someone for was in fact a problem, but the other person did it anyway as a personal favor. To them “You’re welcome” is the standard polite response.
“You’re welcome” means to Millennials what “no problem” means to Baby Boomers, and vice versa. The two phrases have converse meanings to the different age sets. I’m not sure exactly where this line gets drawn, but it’s somewhere in the middle of Gen X. This is a real pain in the ass if you work in customer service because everyone thinks that everyone else is being rude when they’re really being polite in their own language.

Friday, September 22, 2017

Color naming across languages reflects color use

Gibson et al. report a study showing that warm colors are communicated more efficiently than cool colors, and that this cross-linguistic pattern reflects the color statistics of the world:

Significance
The number of color terms varies drastically across languages. Yet despite these differences, certain terms (e.g., red) are prevalent, which has been attributed to perceptual salience. This work provides evidence for an alternative hypothesis: The use of color terms depends on communicative needs. Across languages, from the hunter-gatherer Tsimane' people of the Amazon to students in Boston, warm colors are communicated more efficiently than cool colors. This cross-linguistic pattern reflects the color statistics of the world: Objects (what we talk about) are typically warm-colored, and backgrounds are cool-colored. Communicative needs also explain why the number of color terms varies across languages: Cultures vary in how useful color is. Industrialization, which creates objects distinguishable solely based on color, increases color usefulness.
Abstract
What determines how languages categorize colors? We analyzed results of the World Color Survey (WCS) of 110 languages to show that despite gross differences across languages, communication of chromatic chips is always better for warm colors (yellows/reds) than cool colors (blues/greens). We present an analysis of color statistics in a large databank of natural images curated by human observers for salient objects and show that objects tend to have warm rather than cool colors. These results suggest that the cross-linguistic similarity in color-naming efficiency reflects colors of universal usefulness and provide an account of a principle (color use) that governs how color categories come about. We show that potential methodological issues with the WCS do not corrupt information-theoretic analyses, by collecting original data using two extreme versions of the color-naming task, in three groups: the Tsimane', a remote Amazonian hunter-gatherer isolate; Bolivian-Spanish speakers; and English speakers. These data also enabled us to test another prediction of the color-usefulness hypothesis: that differences in color categorization between languages are caused by differences in overall usefulness of color to a culture. In support, we found that color naming among Tsimane' had relatively low communicative efficiency, and the Tsimane' were less likely to use color terms when describing familiar objects. Color-naming among Tsimane' was boosted when naming artificially colored objects compared with natural objects, suggesting that industrialization promotes color usefulness.
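A simplified version of the information-theoretic scoring behind "warm colors are communicated more efficiently" can be sketched in a few lines. The toy naming counts below are invented; the idea is just that a listener who hears a color term must guess which chip the speaker meant, and chips that are named consistently cost fewer bits to recover.

```python
# Toy illustration: score each color chip by the average surprisal (in bits) a listener
# incurs when recovering the chip from the term a speaker used. The counts are invented.
import numpy as np

counts = np.array([
    # terms:  "red" "yellow" "blue" "green"
    [40,  8,  1,  1],   # chip 0: warm, named consistently
    [ 6, 42,  1,  1],   # chip 1: warm, named consistently
    [ 1,  1, 25, 23],   # chip 2: cool, named inconsistently
    [ 1,  1, 22, 26],   # chip 3: cool, named inconsistently
], dtype=float)

p_term_given_chip = counts / counts.sum(axis=1, keepdims=True)   # P(term | chip)
p_chip_given_term = counts / counts.sum(axis=0, keepdims=True)   # P(chip | term), uniform prior

# Expected cost of communicating each chip: sum over terms of P(term|chip) * -log2 P(chip|term).
surprisal = -(p_term_given_chip * np.log2(p_chip_given_term)).sum(axis=1)
for chip, bits in enumerate(surprisal):
    print(f"chip {chip}: {bits:.2f} bits")   # warm chips come out cheaper here
```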

Friday, July 28, 2017

Different languages use different systems of nerve cells.

From Xu et al.:
A large body of previous neuroimaging studies suggests that multiple languages are processed and organized in a single neuroanatomical system in the bilingual brain, although differential activation may be seen in some studies because of different proficiency levels and/or age of acquisition of the two languages. However, one important possibility is that the two languages may involve interleaved but functionally independent neural populations within a given cortical region, and thus, distinct patterns of neural computations may be pivotal for the processing of the two languages. Using functional magnetic resonance imaging (fMRI) and multivariate pattern analyses, we tested this possibility in Chinese-English bilinguals when they performed an implicit reading task. We found a broad network of regions wherein the two languages evoked different patterns of activity, with only partially overlapping patterns of voxels in a given region. These regions, including the middle occipital cortices, fusiform gyri, and lateral temporal, temporoparietal, and prefrontal cortices, are associated with multiple aspects of language processing. The results suggest the functional independence of neural computations underlying the representations of different languages in bilinguals.
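For readers unfamiliar with multivariate pattern analyses, the core logic can be caricatured with simulated data (this is not the authors' pipeline; the signal strengths and trial counts are invented): if each language evokes its own, partly distinct spatial pattern within a region, a cross-validated classifier should tell the two languages apart from the voxel patterns alone.

```python
# Simulated-data caricature of MVPA decoding: if Chinese and English trials evoke
# partly distinct voxel patterns in a region, a classifier separates them above chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

pattern_chinese = rng.normal(0, 1, n_voxels)   # each language's weak, fixed spatial pattern
pattern_english = rng.normal(0, 1, n_voxels)
X = np.vstack([
    rng.normal(0, 1, (n_trials, n_voxels)) + 0.3 * pattern_chinese,
    rng.normal(0, 1, (n_trials, n_voxels)) + 0.3 * pattern_english,
])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = Chinese trial, 1 = English trial

acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")
```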

Friday, July 14, 2017

Politics and the English Language - George Orwell

The 1946 essay by George Orwell with the title of this post was recently discussed by the Chaos and Complex Systems seminar group that I attend at the University of Wisconsin. Orwell’s comments on the abuse of language (meaningless words, dying metaphors, pretentious diction, etc.) are an apt description of language in today’s Trumpian world. Some rules with which he ends his essay:
1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print. 
2. Never use a long word where a short one will do. 
3. If it is possible to cut a word out, always cut it out. 
4. Never use the passive where you can use the active. 
5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

Friday, June 16, 2017

Watching our brains construct linguistic phrases

From Nelson et al.:

Significance
According to most linguists, the syntactic structure of sentences involves a tree-like hierarchy of nested phrases, as in the sentence [happy linguists] [draw [a diagram]]. Here, we searched for the neural implementation of this hypothetical construct. Epileptic patients volunteered to perform a language task while implanted with intracranial electrodes for clinical purposes. While patients read sentences one word at a time, neural activation in left-hemisphere language areas increased with each successive word but decreased suddenly whenever words could be merged into a phrase. This may be the neural footprint of “merge,” a fundamental tree-building operation that has been hypothesized to allow for the recursive properties of human language.
Abstract
Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
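The key predictor in this kind of analysis, the number of "open nodes" a reader must hold in memory, rises word by word and collapses when a phrase completes, and it is easy to compute for a hand-bracketed sentence. A minimal sketch (my own, with an invented bracketing of the paper's example sentence):

```python
# Count the "open nodes" held in memory after each word of a hand-bracketed sentence:
# each word adds a node; a closing bracket merges a phrase's parts into a single node.
def open_node_counts(tokens):
    """Return (word, open-node count after that word) for a bracketed token sequence."""
    stack, counts = [], []
    for tok in tokens:
        if tok == "[":
            stack.append("[")              # a phrase opens
        elif tok == "]":
            while stack[-1] != "[":        # merge everything inside the phrase...
                stack.pop()
            stack[-1] = "node"             # ...into one node
        else:
            stack.append("node")           # a word is one more node to hold
            counts.append((tok, stack.count("node")))
    return counts

sentence = "[ [ happy linguists ] [ draw [ a diagram ] ] ]".split()
for word, n_open in open_node_counts(sentence):
    print(f"{word:10s} open nodes: {n_open}")
# Load rises with each word but resets downward after a phrase like "happy linguists"
# has been merged, mirroring the reported rise and sudden drop in high-gamma power.
```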

Friday, May 12, 2017

Semantics and the science of fear - the amygdala doesn't 'cause' fear.

Here are some core clips from an article in which Joseph Ledoux updates an idea he proposed several decades ago:
…that objectively measurable behavioral and physiological responses elicited by emotional stimuli were controlled nonconsciously by subcortical circuits, such as those involving the amygdala, while the conscious emotional experience was the result of cortical (mostly prefrontal) circuits that contribute to working memory and related higher cognitive functions. Building on a distinction emerging in the study of memory, I referred to these as implicit (nonconscious) and explicit (conscious) fear circuits.
He has come to realize:
...that the implicit–explicit distinction had less traction in the case of emotions than in memory. The vernacular meaning of emotion words is simply too strong. When we hear the word ‘fear’, the default interpretation is the conscious experience of being in danger, and this meaning dominates. For example, although I consistently emphasized that the amygdala circuits operate nonconsciously, I was often described in both lay and scientific contexts as having shown how feelings of fear emerge from the amygdala. Even researchers working in the objective tradition sometimes appear confused about what they mean by fear; papers in the field commonly refer to ‘frightened rats’ that ‘freeze in fear’. A naïve reader naturally thinks of frightened rats as feeling ‘fear’. As noted above, using mental state terms to describe the function of brain circuits infects the circuit with surplus meaning (psychological properties of the mental state) and confusion invariably results.
Recently, I have … abandoned the implicit–explicit fear approach in favor of a conception that restricts the use of mental state terms to conscious mental states. I now only use ‘fear’ to refer to the experience of fear. It is common these days to argue that folk psychological ideas will be replaced with more accurate scientific constructs as the field matures. Indeed, for nonsubjective brain functions, subjective state labels should be eliminated. This is what I had in mind when I proposed calling the amygdala circuit a defensive survival circuit instead of a fear circuit (see Figure). However, the language of folk psychology describes conscious experiences, such as fear, just fine.



Figure - The Two-Circuit View of Threat Processing and the Experience of Fear. (A) In the two-circuit model, threats are processed in parallel by subcortical and cortical circuits. A subcortical defensive survival circuit centered on the amygdala initiates defensive behaviors in response to threats, while a cortical (mostly prefrontal) cognitive circuit underlying working memory gives rise to the conscious experience of fear. In many situations, survival circuit activity also contributes, albeit indirectly, to fearful feelings. (B) Conscious feelings of fear are proposed to emerge in the cortical circuit as a result of information integration in working memory, including information about sensory and various memory representations, as well as information from survival and arousal circuit activity within the brain, and feedback from body responses.
Psychology is different from other sciences, and has hurdles that they lack. Atoms do not study atoms, but minds study mental states and behaviors. When we engage in psychological research, we must take care to account for the prominent role of subjective experiences in our lives, while also being careful not to attribute subjective causes to behaviors controlled nonconsciously. Conflation of behavioral control circuits with subjective states by indiscriminate use of subjective state terms for both behavioral control circuits and conscious experiences is not a problem restricted to fear. It is present in many areas, including motivation, reward, pain, perception, and memory, to name a few obvious ones. Fear researchers, by addressing this issue, might well set an example that also paves the way for crisper conceptions in other areas of research.




Tuesday, April 25, 2017

Reading what the mind thinks from how the eye sees.

Expressive eye widening (as in fear) and eye narrowing (as in disgust) are associated with opposing optical consequences and serve opposing perceptual functions. Lee and Anderson suggest that the opposing effects of eye widening and narrowing on the expresser’s visual perception have been socially co-opted to denote opposing mental states of sensitivity and discrimination, respectively, such that opposing complex mental states may originate from this simple perceptual opposition. Their abstract:
Human eyes convey a remarkable variety of complex social and emotional information. However, it is unknown which physical eye features convey mental states and how that came about. In the current experiments, we tested the hypothesis that the receiver’s perception of mental states is grounded in expressive eye appearance that serves an optical function for the sender. Specifically, opposing features of eye widening versus eye narrowing that regulate sensitivity versus discrimination not only conveyed their associated basic emotions (e.g., fear vs. disgust, respectively) but also conveyed opposing clusters of complex mental states that communicate sensitivity versus discrimination (e.g., awe vs. suspicion). This sensitivity-discrimination dimension accounted for the majority of variance in perceived mental states (61.7%). Further, these eye features remained diagnostic of these complex mental states even in the context of competing information from the lower face. These results demonstrate that how humans read complex mental states may be derived from a basic optical principle of how people see.

Friday, February 10, 2017

Kind words in language - changes over time

John Carson does a nice precis of Iliev et al.:
It is debated whether linguistic positivity bias (LPB) — the cross-cultural tendency to use more positive words than negative — results from a common cognitive underpinning or our environmental and cultural context.
Rumen Iliev, from the University of Michigan, and colleagues tackle the theoretical stalemate by looking at changes in positive word usage within a language over time. They use time-stamped texts from Google Books Ngrams and the New York Times to analyse LPB trends in American English over the last 200 years. They show that LPB has declined overall since 1800, which discounts the importance of universal cognition and, they suggest, aligns most strongly with a decline in social cohesion and prosociality in the United States. They find a significant association between LPB and casualty levels in war, economic performance, and measures of public happiness, suggesting that objective circumstances and subjective public mood drive its dynamics.
Analysing time-stamped historical texts is a powerful way to investigate evolving behaviours. The next step will be to look across other languages and historical events and tease apart the contribution of different contextual factors to LPB.
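The underlying measurement is straightforward to sketch: count positive versus negative lexicon hits in time-stamped text and track the positive share over the years. The tiny lexicons and "corpus" below are invented for illustration and are not Iliev et al.'s materials.

```python
# Toy linguistic-positivity-bias (LPB) trend: the share of emotional words that are
# positive, tracked over time. Lexicons and the tiny "corpus" are invented.
from collections import Counter

POSITIVE = {"good", "happy", "love", "peace", "hope"}
NEGATIVE = {"bad", "sad", "war", "fear", "loss"}

corpus = {
    1850: "hope and love and peace were good though war brought loss",
    1950: "war and fear and loss but also hope",
    2000: "bad news and sad stories with little hope",
}

for year, text in sorted(corpus.items()):
    counts = Counter(text.split())
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    print(f"{year}: LPB = {pos / (pos + neg):.2f}")
```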

Friday, January 06, 2017

Dual streams of speech processing.

A large number of studies have documented how visual information in the brain is processed in dual streams: a dorsal stream (where is it?) and a ventral stream (what is it?). Fridriksson et al. have now applied a dual-route model to speech processing that distinguishes form-to-meaning from form-to-articulation processing, and I pass on their abstract plus one graphic showing the brain regions they are dealing with:
Several dual route models of human speech processing have been proposed suggesting a large-scale anatomical division between cortical regions that support motor–phonological aspects vs. lexical–semantic aspects of speech processing. However, to date, there is no complete agreement on what areas subserve each route or the nature of interactions across these routes that enables human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we used a data-driven approach using principal components analysis of lesion-symptom mapping to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal–frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams as well as the observation that there is a bias for phonological production tasks supported by the dorsal stream and lexical–semantic comprehension tasks supported by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.


Component 1 (Form-to-meaning processing necessary for single word and sentence comprehension, also reversed (meaning-to-form processing) to support lexical–semantic aspects of speech production) is represented in Left, Component 2 (form-to-articulation processing) is represented in Center (Component 2a), and Component 2 modulated by a lesion component derived from lesion maps is represented in Right (Component 2b).
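For the curious, the data-driven logic, PCA over behavioral scores first and lesion-symptom mapping of the resulting components second, can be sketched with simulated data (this is my own toy, not the authors' pipeline; patient numbers, task counts, and voxel counts are invented).

```python
# Simulated-data sketch: PCA over behavioral task scores, then a simple lesion-symptom
# map asking which voxels' lesion status tracks each component. All numbers invented.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_patients, n_tasks, n_voxels = 100, 12, 500

scores = rng.normal(0, 1, (n_patients, n_tasks))       # behavioral scores per patient
lesions = rng.integers(0, 2, (n_patients, n_voxels))   # 1 = voxel lesioned in that patient

components = PCA(n_components=2).fit_transform(scores) # task clusters, no a priori labels

lesion_map = np.array([
    [np.corrcoef(lesions[:, v], components[:, c])[0, 1] for v in range(n_voxels)]
    for c in range(2)
])
for c in range(2):
    print(f"component {c + 1}: strongest voxel = {np.argmax(np.abs(lesion_map[c]))}")
```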

Monday, September 19, 2016

Defining brain areas involved in music perception.

From Sihvonen et al.:
Although acquired amusia is a relatively common disorder after stroke, its precise neuroanatomical basis is still unknown. To evaluate which brain regions form the neural substrate for acquired amusia and its recovery, we performed a voxel-based lesion-symptom mapping (VLSM) and morphometry (VBM) study with 77 human stroke subjects. Structural MRIs were acquired at acute and 6 month poststroke stages. Amusia and aphasia were behaviorally assessed at acute and 3 month poststroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA) and language tests. VLSM analyses indicated that amusia was associated with a lesion area comprising the superior temporal gyrus, Heschl's gyrus, insula, and striatum in the right hemisphere, clearly different from the lesion pattern associated with aphasia. Parametric analyses of MBEA Pitch and Rhythm scores showed extensive lesion overlap in the right striatum, as well as in the right Heschl's gyrus and superior temporal gyrus. Lesions associated with Rhythm scores extended more superiorly and posterolaterally. VBM analysis of volume changes from the acute to the 6 month stage showed a clear decrease in gray matter volume in the right superior and middle temporal gyri in nonrecovered amusic patients compared with nonamusic patients. This increased atrophy was more evident in anterior temporal areas in rhythm amusia and in posterior temporal and temporoparietal areas in pitch amusia. Overall, the results implicate right temporal and subcortical regions as the crucial neural substrate for acquired amusia and highlight the importance of different temporal lobe regions for the recovery of amusia after stroke.

Friday, September 16, 2016

Predicting false memories with fMRI

Chadwick et al. find that the apex of the ventral processing stream in the brain's temporal pole (TP) contains partially overlapping neural representations of related concepts, and that the extent of this neural overlap directly reflects the degree of semantic similarity between the concepts. Furthermore, the neural overlap between sets of related words predicts the likelihood of making a false-memory error. (One could wonder whether further development of work of this sort might make it possible to perform an fMRI evaluation of an eyewitness in an important trial to determine whether their testimony is more or less likely to be correct.)

Significance
False memories can arise in daily life through a mixture of factors, including misinformation and prior conceptual knowledge. This can have serious consequences in settings, such as legal eyewitness testimony, which depend on the accuracy of memory. We investigated the brain basis of false memory with fMRI, and found that patterns of activity in the temporal pole region of the brain can predict false memories. Furthermore, we show that each individual has unique patterns of brain activation that can predict their own idiosyncratic set of false-memory errors. Together, these results suggest that the temporal pole may be responsible for the conceptual component of illusory memories.
Abstract
Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference. Whereas previous studies have found that a diverse set of regions show some involvement in semantic false memory, none have revealed the nature of the semantic representations underpinning the phenomenon. Here we use fMRI with representational similarity analysis to search for a neural code consistent with semantic false memory. We find clear evidence that false memories emerge from a similarity-based neural code in the temporal pole, a region that has been called the “semantic hub” of the brain. We further show that each individual has a partially unique semantic code within the temporal pole, and this unique code can predict idiosyncratic patterns of memory errors. Finally, we show that the same neural code can also predict variation in true-memory performance, consistent with an adaptive perspective on false memory. Taken together, our findings reveal the underlying structure of neural representations of semantic knowledge, and how this semantic structure can both enhance and distort our memories.
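The representational-similarity idea at the heart of the paper can be illustrated with a toy simulation (invented data, not the authors' analysis): a never-studied "lure" word whose temporal-pole pattern strongly overlaps the patterns of studied words is the one the model flags as a likely false memory.

```python
# Toy representational-similarity logic: a lure whose (simulated) temporal-pole pattern
# overlaps the patterns of studied words is flagged as a likely false memory.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

base_animal = rng.normal(0, 1, n_voxels)    # shared "animal concept" component
patterns = {
    "cat":   base_animal + 0.5 * rng.normal(0, 1, n_voxels),   # studied
    "dog":   base_animal + 0.5 * rng.normal(0, 1, n_voxels),   # studied
    "wolf":  base_animal + 0.5 * rng.normal(0, 1, n_voxels),   # related lure
    "chair": rng.normal(0, 1, n_voxels),                       # unrelated lure
}

def similarity(a, b):
    return float(np.corrcoef(a, b)[0, 1])

studied = ["cat", "dog"]
for lure in ["wolf", "chair"]:
    overlap = np.mean([similarity(patterns[lure], patterns[w]) for w in studied])
    risk = "high" if overlap > 0.3 else "low"
    print(f"{lure:6s} mean similarity to studied items = {overlap:.2f} -> {risk} false-memory risk")
```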

Tuesday, August 16, 2016

The long lives of fairy tales.

I pass on some clips from a review by Pagel of work by Da Silva and Tehrani suggesting that some common fairy tales can be traced back 7,000 years or more, long before written languages appeared.
The Indo-European language family is a collection of related languages that probably arose in Anatolia and is now spoken all over western Eurasia. Its modern descendants include the Celtic, Germanic and Italic or Romance languages of western Europe, the Slavic languages of Russia and much of the Balkans, and the Indo-Iranian languages including Persian, as well as Sanskrit and most of the languages of the Indian sub-continent.
Language evolves faster than genes and language is predominantly vertically transmitted. Similarities and differences among vocabulary items, then, play the same role for cultural phylogenies as genes do for species trees, and provide greater resolution over short timescales. The Indo-European language tree is one of the most carefully studied of these language phylogenies
With a phylogenetic tree in hand, the authors recorded the presence or absence of each of 275 fairy tales in fifty Indo-European languages...Of the 275 tales, the authors discarded 199 after performing two tests of horizontal transmission...This left a group of 76 tales for which vertical transmission over the course of Indo-European history was the dominant signal for the patterns of shared presence and absence among contemporary societies. Hänsel and Gretel didn’t make this cut, but Beauty and the Beast did.
Evolutionary statistical methods were then applied to calculate a probability that each of the tales was present at each of various major historical splitting points on the Indo-European language phylogeny, taking account of uncertainty both in the phylogeny and in the reconstructed state. Calculating the ancestral probabilities depends only upon the distribution of tales in the contemporary languages in combination with the phylogenetic tree and so neatly gets around the problem that few if any tales exist as ‘fossil’ texts...Fourteen of the 76 tales, including Beauty and the Beast, were assigned a 50% or greater chance of having been present in the common ancestor of the entire western branch of the Indo-European languages...
A further four of the fourteen tales — but not Beauty and the Beast — had a 50% or greater probability of being present at the root of the Indo-European tree. A proto-Indo-European origin for these four tales represents a probable age of over 7,000 years. The tale with the highest probability (87%) of being present at the root was The Smith and the Devil, whose story of a smith selling his soul to the devil is echoed today in the modern story of Faust. The authors suggest that metal working technology — as implied by the presence of a smith — could have been available this long ago.
Considering all these notions might lead us to ask why not more of the fairy tales appeared right back at the Indo-European root, or perhaps to wonder if some could go back even further. Perhaps some do. Flood myths appear in many of the world’s cultures, with some speculation that they date to the end of the last Ice Age perhaps 15,000 to 20,000 years ago when sea levels rose dramatically — if true, the western Bible story of Noah is just a comparatively recent hand-me-down.
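The ancestral-state calculation that yields numbers like "87% probability at the root" is, at its core, Felsenstein's pruning algorithm applied to a present/absent trait on a language tree. Here is a hedged toy version on a made-up four-language tree (the topology, branch lengths, and gain/loss rate are invented, and the real analysis averages over many candidate trees).

```python
# Felsenstein's pruning algorithm for a present/absent tale on a made-up four-language
# tree. Internal nodes are lists of (child, branch_length); tips are 1 (attested) or 0.
import math

def p_transition(same, rate, t):
    """Two-state symmetric model: probability the trait stays the same (or flips) over a branch."""
    stay = 0.5 + 0.5 * math.exp(-2 * rate * t)
    return stay if same else 1.0 - stay

def conditional_likelihoods(node, rate):
    """Return [L(absent), L(present)] for the subtree below `node`."""
    if isinstance(node, int):                      # a tip: the trait is observed directly
        return [1.0 - node, float(node)]
    lik = [1.0, 1.0]
    for child, branch_length in node:
        child_lik = conditional_likelihoods(child, rate)
        for state in (0, 1):
            lik[state] *= sum(
                p_transition(state == s, rate, branch_length) * child_lik[s]
                for s in (0, 1)
            )
    return lik

# ((lang_A, lang_B), (lang_C, lang_D)): the tale is attested in A, B, and C but not D.
tree = [([(1, 1.0), (1, 1.0)], 1.0), ([(1, 1.0), (0, 2.0)], 1.0)]
L_absent, L_present = conditional_likelihoods(tree, rate=0.2)
print(f"P(tale present at root) = {L_present / (L_absent + L_present):.2f}")  # flat root prior
```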

Tuesday, July 12, 2016

A non-linguistic brain network for mathematics.

From Amalric and Dehaene:
The origins of human abilities for mathematics are debated: Some theories suggest that they are founded upon evolutionarily ancient brain circuits for number and space and others that they are grounded in language competence. To evaluate what brain systems underlie higher mathematics, we scanned professional mathematicians and mathematically naive subjects of equal academic standing as they evaluated the truth of advanced mathematical and nonmathematical statements. In professional mathematicians only, mathematical statements, whether in algebra, analysis, topology or geometry, activated a reproducible set of bilateral frontal, intraparietal, and ventrolateral temporal regions. Crucially, these activations spared areas related to language and to general-knowledge semantics. Rather, mathematical judgments were related to an amplification of brain activity at sites that are activated by numbers and formulas in nonmathematicians, with a corresponding reduction in nearby face responses. The evidence suggests that high-level mathematical expertise and basic number sense share common roots in a nonlinguistic brain circuit.

Wednesday, June 22, 2016

Smartphone Era Politics

Selections from one of Roger Cohen's many intelligent essays in The New York Times:
The time has come for a painful confession: I have spent my life with words, yet I am illiterate...I do not have the words to be at ease in this world of steep migration from desktop to mobile, of search-engine optimization, of device-agnostic bundles, of cascading metrics and dashboards and buckets, of post-print onboarding and social-media FOMO (fear of missing out).
I was more at home with the yarn du jour. Jour was once an apt first syllable for the word journalism; hour would now be more appropriate...That was in the time of distance. Disconnection equaled immersion. Today, connection equals distraction...
We find ourselves at a pivot point. How we exist in relation to one another is in the midst of radical redefinition, from working to flirting. The smartphone is a Faustian device, at once liberation and enslavement. It frees us to be anywhere and everywhere — and most of all nowhere. It widens horizons. It makes those horizons invisible. Upright homo sapiens, millions of years in the making, has yielded in a decade to the stooped homo sapiens of downward device-dazzled gaze.
Perhaps this is how the calligrapher felt after 1440, when it began to be clear what Gutenberg had wrought. A world is gone. Another, as poor Jeb Bush (!) discovered, is being born — one where words mean everything and the contrary of everything, where sentences have lost their weight, where volume drowns truth.
You have to respect American voters. They are changing the lexicon in their anger with the status quo. They don’t care about consistency. They care about energy. Reasonableness dies. Provocation works. Whether you are for or against something, or both at the same time, is secondary to the rise your position gets. Our times are unpunctuated. Politics, too, has a new language, spoken above all by the Republican front-runner as he repeats that, “There is something going on.”...This appears to be some form of addictive delirium. It is probably dangerous in some still unknowable way.
Technology has upended not only newspapers. It has upended language itself, which is none other than a community’s system of communication. What is a community today? (One thing young people don't do on their smartphones is actually talk to each other.) Can there be community at all with downward gazes? I am not sure. But I am certain that cross-platform content has its beauty and its promise if only I could learn the right words to describe them.

Thursday, May 26, 2016

Culture shapes the evolution of cognition.

From Thompson et al.:
A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.
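The amplification argument can be made tangible with a small iterated-learning toy (my own illustration, not one of the two models in the paper): give each learner only a weak prior preference for one variant, let generations learn from samples of the previous generation, and the population-level share of that variant drifts well above the individual bias.

```python
# Iterated-learning toy: learners have only a weak (55%) prior preference for variant 'A',
# yet the population-level share of 'A' is amplified well above the starting 50%.
import random

random.seed(3)

def learn(observed, bias=0.55):
    """Adopt 'A' or 'B' by blending an observed sample with a weak prior for 'A'."""
    p_a = bias * (observed.count("A") + 1)        # +1 smoothing keeps both variants possible
    p_b = (1 - bias) * (observed.count("B") + 1)
    return "A" if random.random() < p_a / (p_a + p_b) else "B"

population = ["A" if random.random() < 0.5 else "B" for _ in range(200)]
for generation in range(1, 31):
    population = [learn(random.sample(population, 10)) for _ in range(200)]
    if generation % 10 == 0:
        print(f"generation {generation}: share of 'A' = {population.count('A') / 200:.2f}")
```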

Wednesday, May 04, 2016

Semantic maps in our brains - and some interactive graphics

Huth et al. have performed functional MRI on subjects listening to hours of narrative stories to find semantic domains that seem to be consistent across individuals. This interactive 3D viewer (a preliminary version with limited data that takes a while to download and requires a fairly fast computer) shows a color coding of areas with different semantic selectivities (body part, person, place, time, outdoor, visual, tactile, violence, etc.). Here is their Nature abstract:
The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.
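The voxel-wise modelling itself is conceptually simple to sketch (simulated data; the feature names below are hypothetical, and the real model uses hundreds of word-embedding features): describe each moment of the story with semantic features, fit a regularized linear model per voxel, and read off which feature each voxel weights most.

```python
# Simulated voxel-wise encoding model: semantic features of the story over time predict
# each voxel's response; the feature a voxel weights most is its "semantic selectivity".
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 300, 5, 3
feature_names = ["person", "place", "number", "violence", "tactile"]   # hypothetical labels

X = rng.normal(0, 1, (n_timepoints, n_features))          # story features over time
true_weights = rng.normal(0, 1, (n_features, n_voxels))   # each voxel's hidden tuning
Y = X @ true_weights + rng.normal(0, 0.5, (n_timepoints, n_voxels))   # simulated responses

model = Ridge(alpha=1.0).fit(X, Y)                        # one regression per voxel
for v in range(n_voxels):
    best = int(np.argmax(np.abs(model.coef_[v])))         # coef_ has shape (n_voxels, n_features)
    print(f"voxel {v}: most selective for '{feature_names[best]}'")
```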

Monday, February 08, 2016

Our digital presence.

Here is an interesting tally: active users of social media sites compared with the populations of the world's largest countries. If social media sites were countries, Facebook would be the world’s largest country, with more active accounts than there are people in China. Twitter would rank 4th, with twice the “population” of the USA, and Instagram would round out the Top 10.

There are ~7.4 billion people in the world; ~43% are connected to the internet, and ~4 billion, mostly in the developing world, lack internet access. Most pundits expect that by 2025, digital access will have spread to 80% of all people.