Thursday, May 31, 2018

A.I. needs to learn like a human child.

Matthew Hutson summarizes efforts to nudge machine learning researchers away from the assumption that "computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules." Some clips from his article:
In February, MIT launched Intelligence Quest, a research initiative now raising hundreds of millions of dollars to understand human intelligence in engineering terms. Such efforts, researchers hope, will result in AIs that sit somewhere between pure machine learning and pure instinct. They will boot up following some embedded rules, but will also learn as they go.
Part of the quest will be to discover what babies know and when—lessons that can then be applied to machines. That will take time, says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. AI2 recently announced a $125 million effort to develop and test common sense in AI. "We would love to build on the representational structure innate in the human brain," Etzioni says, "but we don't understand how the brain processes language, reasoning, and knowledge."
Harvard University psychologist Elizabeth Spelke has argued that we have at least four "core knowledge" systems giving us a head start on understanding objects, actions, numbers, and space. We are intuitive physicists, for example, quick to understand objects and their interactions...Gary Marcus has composed a minimum list of 10 human instincts that he believes should be baked into AIs, including notions of causality, cost-benefit analysis, and types versus instances (dog versus my dog).
The debate over where to situate an AI on a spectrum between pure learning and pure instinct will continue. But that issue overlooks a more practical concern: how to design and code such a blended machine. How to combine machine learning—and its billions of neural network parameters—with rules and logic isn't clear. Neither is how to identify the most important instincts and encode them flexibly. But that hasn't stopped some researchers and companies from trying.
...researchers are working to inject their AIs with the same intuitive physics that babies seem to be born with. Computer scientists at DeepMind in London have developed what they call interaction networks. They incorporate an assumption about the physical world: that discrete objects exist and have distinctive interactions. Just as infants quickly parse the world into interacting entities, those systems readily learn objects' properties and relationships. Their results suggest that interaction networks can predict the behavior of falling strings and balls bouncing in a box far more accurately than a generic neural network.
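To make the inductive bias described above concrete, here is a minimal numpy sketch of the structure an interaction network imposes: compute an effect for every pair of objects, sum the effects arriving at each object, then update that object. This is my toy illustration with a hand-written "relation" function; in the DeepMind work the relation and object-update functions are learned neural networks.
```python
import numpy as np

def pairwise_effect(receiver, sender):
    # Placeholder "relation" function. In a real interaction network this is
    # a learned network; here it is a toy attraction toward the sender.
    return sender[:2] - receiver[:2]

def interaction_step(objects, dt=0.1):
    """One prediction step over a set of object states.

    objects: array of shape (N, 4), each row holding [x, y, vx, vy].
    The key assumption: effects are computed per object pair, summed per
    receiving object, and then used to update that object.
    """
    n = len(objects)
    effects = np.zeros((n, 2))
    for i in range(n):
        for j in range(n):
            if i != j:
                effects[i] += pairwise_effect(objects[i], objects[j])
    updated = objects.copy()
    updated[:, 2:] += dt * effects           # velocities pick up the summed effects
    updated[:, :2] += dt * updated[:, 2:]    # positions advance with the new velocities
    return updated

# Two objects at rest, pulled toward each other over a few steps.
state = np.array([[0.0, 0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0, 0.0]])
for _ in range(3):
    state = interaction_step(state)
print(state)
```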
Vicarious, a robotics software company in San Francisco, California, is taking the idea further with what it calls schema networks. Those systems, too, assume the existence of objects and interactions, but they also try to infer the causality that connects them. By learning over time, the company's software can plan backward from desired outcomes, as people do. (I want my nose to stop itching; scratching it will probably help.) The researchers compared their method with a state-of-the-art neural network on the Atari game Breakout, in which the player slides a paddle to deflect a ball and knock out bricks. Because the schema network could learn about causal relationships—such as the fact that the ball knocks out bricks on contact no matter its velocity—it didn't need extra training when the game was altered. You could move the target bricks or make the player juggle three balls, and the schema network still aced the game. The other network flailed.
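The "planning backward from desired outcomes" idea can be illustrated with a toy goal-regression loop over a hand-written causal model. This is my sketch, not Vicarious's schema networks, which learn the causal structure rather than having it supplied:
```python
# Regress from a desired outcome through known causes until a controllable
# action is reached (hand-written Breakout-flavored example).
causes = {
    "brick_removed": ["ball_hits_brick"],
    "ball_hits_brick": ["ball_moving_toward_brick"],
    "ball_moving_toward_brick": ["paddle_deflects_ball"],
    "paddle_deflects_ball": ["move_paddle_under_ball"],  # an action the agent controls
}
actions = {"move_paddle_under_ball"}

def plan_backward(goal):
    """Return a chain of events from an action to the goal, if one exists."""
    if goal in actions:
        return [goal]
    for cause in causes.get(goal, []):
        sub_plan = plan_backward(cause)
        if sub_plan:
            return sub_plan + [goal]
    return []

print(plan_backward("brick_removed"))
# ['move_paddle_under_ball', 'paddle_deflects_ball', 'ball_moving_toward_brick',
#  'ball_hits_brick', 'brick_removed']
```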
Besides our innate abilities, humans also benefit from something most AIs don't have: a body. To help software reason about the world, Vicarious is "embodying" it so it can explore virtual environments, just as a baby might learn something about gravity by toppling a set of blocks. In February, Vicarious presented a system that looked for bounded regions in 2D scenes by essentially having a tiny virtual character traverse the terrain. As it explored, the system learned the concept of containment, which helps it make sense of new scenes faster than a standard image-recognition convnet that passively surveyed each scene in full. Concepts—knowledge that applies to many situations—are crucial for common sense. "In robotics it's extremely important that the robot be able to reason about new situations," says Dileep George, a co-founder of Vicarious. Later this year, the company will pilot test its software in warehouses and factories, where it will help robots pick up, assemble, and paint objects before packaging and shipping them.
One of the most challenging tasks is to code instincts flexibly, so that AIs can cope with a chaotic world that does not always follow the rules. Autonomous cars, for example, cannot count on other drivers to obey traffic laws. To deal with that unpredictability, Noah Goodman, a psychologist and computer scientist at Stanford University in Palo Alto, California, helps develop probabilistic programming languages (PPLs). He describes them as combining the rigid structures of computer code with the mathematics of probability, echoing the way people can follow logic but also allow for uncertainty: If the grass is wet it probably rained—but maybe someone turned on a sprinkler. Crucially, a PPL can be combined with deep learning networks to incorporate extensive learning. While working at Uber, Goodman and others invented such a "deep PPL," called Pyro. The ride-share company is exploring uses for Pyro such as dispatching drivers and adaptively planning routes amid road construction and game days. Goodman says PPLs can reason not only about physics and logistics, but also about how people communicate, coping with tricky forms of expression such as hyperbole, irony, and sarcasm.
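The wet-grass example translates almost directly into a probabilistic program. Here is a minimal sketch using Pyro's basic primitives (pyro.sample, pyro.condition, importance sampling) as introduced in its tutorials; the probabilities are made up for illustration and are not from the article:
```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import Importance, EmpiricalMarginal

def wet_grass_model():
    rain = pyro.sample("rain", dist.Bernoulli(0.3))
    sprinkler = pyro.sample("sprinkler", dist.Bernoulli(0.2))
    # Noisy-OR: the grass is likely wet if it rained and/or the sprinkler ran.
    p_wet = 1.0 - (1.0 - 0.01) * (1.0 - 0.9 * rain) * (1.0 - 0.8 * sprinkler)
    pyro.sample("wet", dist.Bernoulli(p_wet))
    return rain

# Condition on the observation "the grass is wet" and ask how likely rain is.
conditioned = pyro.condition(wet_grass_model, data={"wet": torch.tensor(1.0)})
posterior = Importance(conditioned, num_samples=2000).run()
print("P(rain | wet grass) ~", EmpiricalMarginal(posterior, "rain").mean.item())
```
The point is the combination Goodman describes: ordinary program structure (the model function) plus probability (the sampled variables), so the same program can answer "maybe it rained, maybe it was the sprinkler" queries under uncertainty.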

Wednesday, May 30, 2018

Intelligent brains have more sparse and efficient nerve connections.

From Genç et al.:
Previous research has demonstrated that individuals with higher intelligence are more likely to have larger gray matter volume in brain areas predominantly located in parieto-frontal regions. These findings were usually interpreted to mean that individuals with more cortical brain volume possess more neurons and thus exhibit more computational capacity during reasoning. In addition, neuroimaging studies have shown that intelligent individuals, despite their larger brains, tend to exhibit lower rates of brain activity during reasoning. However, the microstructural architecture underlying both observations remains unclear. By combining advanced multi-shell diffusion tensor imaging with a culture-fair matrix-reasoning test, we found that higher intelligence in healthy individuals is related to lower values of dendritic density and arborization. These results suggest that the neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning.

Tuesday, May 29, 2018

The light and dark sides of friendship.

I want to point to two brief reviews by Natalie Angier on friendship. She first describes work by Parkinson et al. showing similarities in the brain activity of friends. The Parkinson et al. abstract:
Human social networks are overwhelmingly homophilous: individuals tend to befriend others who are similar to them in terms of a range of physical attributes (e.g., age, gender). Do similarities among friends reflect deeper similarities in how we perceive, interpret, and respond to the world? To test whether friendship, and more generally, social network proximity, is associated with increased similarity of real-time mental responding, we used functional magnetic resonance imaging to scan subjects’ brains during free viewing of naturalistic movies. Here we show evidence for neural homophily: neural responses when viewing audiovisual movies are exceptionally similar among friends, and that similarity decreases with increasing distance in a real-world social network. These results suggest that we are exceptionally similar to our friends in how we perceive and respond to the world around us, which has implications for interpersonal influence and attraction.
Angier also notes work showing that the other side of homophily, or friendship, can be the urge to "otherize" those who differ from you and your friends.
...the study from the University of Michigan had subjects stand outside on a cold winter day and read a brief story about a hiker who was described as either a “left-wing, pro-gay-rights Democrat” or a “right-wing, anti-gay-rights Republican.” When asked whether the hypothetical hiker might feel chilly as well, participants were far more likely to say yes if the protagonist’s political affiliation agreed with their own. But a political adversary — does that person even have skin, let alone a working set of thermal sensors?
The abstract of that study:
What people feel shapes their perceptions of others. In the studies reported here, we examined the assimilative influence of visceral states on social judgment. Replicating prior research, we found that participants who were outside during winter overestimated the extent to which other people were bothered by cold (Study 1), and participants who ate salty snacks without water thought other people were overly bothered by thirst (Study 2). However, in both studies, this effect evaporated when participants believed that the other people under consideration held political views opposing their own. Participants who judged these dissimilar others were unaffected by their own strong visceral-drive states, a finding that highlights the power of dissimilarity in social judgment. Dissimilarity may thus represent a boundary condition for embodied cognition and inhibit an empathic understanding of shared out-group pain. Our findings reveal the need for a better understanding of how people’s internal experiences influence their perceptions of the feelings and experiences of those who may hold values different from their own.

Monday, May 28, 2018

Good things about a party drug (Ketamine).

Ketamine, used extensively as an anesthetic during the Vietnam war and also as a party drug, has a rapidly acting antidepressant effect. A recent clinical trial has shown the antidepressant efficacy of esketamine, the nasal-spray form of the club drug ketamine, suggesting that it might be useful for the rapid treatment of suicidal depression. Conventional antidepressants require 4-6 weeks to be effective. Several labs are studying ketamine's mechanism of action. Yang et al. have found that neuronal burst firing in the lateral habenula, which drives robust depressive-like behaviors, is rapidly blocked by local ketamine infusion. Instead of acting on GABAergic neurons as previously suggested, ketamine blocked glutamatergic neurons in the “anti-reward center” lateral habenula to disinhibit downstream dopaminergic and serotonergic neurons. Widman et al. report that ketamine enhances excitability of pyramidal cells indirectly by reducing synaptic GABAergic inhibition, thus causing disinhibition. They show that only those antagonists with antidepressant efficacy in humans disinhibit pyramidal cells at a clinically relevant concentration, supporting the concept that disinhibition is likely involved in the antidepressant effect of these antagonists.

Friday, May 25, 2018

Why we itch more with aging.

Lewis and Grandl offer context and note the significance of work by Feng et al.:
It is well known that aging is accompanied by the death of specific cell types that function as sensors of outside signals and that this cell death leads to deficits in our ability to detect these signals. For example, age-associated loss of sensory hair cells and/or spiral ganglia neurons in the inner ear leads to progressive hearing loss, particularly of high frequencies. Similarly, death of photoreceptors in the retina of the eye is a key aspect of the pathogenesis of age-related macular degeneration, the leading cause of vision impairment in individuals older than 60 years of age. Feng et al. now identify an unusual link between age-related loss of a sensory cell type and aberrant sensory processing: During aging, the loss of specialized skin cells called Merkel cells results in alloknesis, the pathological sensation of itch in response to innocuous mechanical stimuli... 
The finding that Merkel cells normally protect against mechanical itch is notable because it is initially counterintuitive. Whereas in other sensory modalities (for example, vision and hearing), a reduction in sensory cell number as a result of cell death leads to a detrimental reduction in sensation, here, death of Merkel cells leads to an increase in unwanted sensation; that is, an otherwise nonaversive stimulus is perceived as potentially harmful.
The Feng et al. abstract:
The somatosensory system relays many signals ranging from light touch to pain and itch. Touch is critical to spatial awareness and communication. However, in disease states, innocuous mechanical stimuli can provoke pathologic sensations such as mechanical itch (alloknesis). The molecular and cellular mechanisms that govern this conversion remain unknown. We found that in mice, alloknesis in aging and dry skin is associated with a loss of Merkel cells, the touch receptors in the skin. Targeted genetic deletion of Merkel cells and associated mechanosensitive Piezo2 channels in the skin was sufficient to produce alloknesis. Chemogenetic activation of Merkel cells protected against alloknesis in dry skin. This study reveals a previously unknown function of the cutaneous touch receptors and may provide insight into the development of alloknesis.

Thursday, May 24, 2018

The microbiome regulates amygdala-dependent fear recall.

Hoban et al. show an interaction between gut microbes and the amygdala-regulated expression of anxiety and fear. Their technical abstract:
The amygdala is a key brain region that is critically involved in the processing and expression of anxiety and fear-related signals. In parallel, a growing number of preclinical and human studies have implicated the microbiome–gut–brain axis in regulating anxiety and stress-related responses. However, the role of the microbiome in fear-related behaviours is unclear. To this end we investigated the importance of the host microbiome on amygdala-dependent behavioural readouts using the cued fear conditioning paradigm. We also assessed changes in neuronal transcription and post-transcriptional regulation in the amygdala of naive and stimulated germ-free (GF) mice, using a genome-wide transcriptome profiling approach. Our results reveal that GF mice display reduced freezing during the cued memory retention test. Moreover, we demonstrate that under baseline conditions, GF mice display altered transcriptional profile with a marked increase in immediate-early genes (for example, Fos, Egr2, Fosb, Arc) as well as genes implicated in neural activity, synaptic transmission and nervous system development. We also found a predicted interaction between mRNA and specific microRNAs that are differentially regulated in GF mice. Interestingly, colonized GF mice (ex-GF) were behaviourally comparable to conventionally raised (CON) mice. Together, our data demonstrates a unique transcriptional response in GF animals, likely because of already elevated levels of immediate-early gene expression and the potentially underlying neuronal hyperactivity that in turn primes the amygdala for a different transcriptional response. Thus, we demonstrate for what is to our knowledge the first time that the presence of the host microbiome is crucial for the appropriate behavioural response during amygdala-dependent memory retention.

Wednesday, May 23, 2018

Research on subjective well-being.

Given the drumbeat of daily negative news we all face, it is useful to be open to facts about longer-term trends showing improvement in different areas of life (cf. my series of posts - starting on 4/1/18 - on Pinker's new book, "Enlightenment Now.") In this vein I pass on an open-access review article from Nature Human Behaviour by Diener et al. describing recent research on subjective well-being. The abstract:
The empirical science of subjective well-being, popularly referred to as happiness or satisfaction, has grown enormously in the past decade. In this Review, we selectively highlight and summarize key researched areas that continue to develop. We describe the validity of measures and their potential biases, as well as the scientific methods used in this field. We describe some of the predictors of subjective well-being such as temperament, income and supportive social relationships. Higher subjective well-being has been associated with good health and longevity, better social relationships, work performance and creativity. At the community and societal levels, cultures differ not only in their levels of well-being but also to some extent in the types of subjective well-being they most value. Furthermore, there are both universal and unique predictors of subjective well-being in various societies. National accounts of subjective well-being to help inform policy decisions at the community and societal levels are now being considered and adopted. Finally we discuss the unknowns in the science and needed future research.

Tuesday, May 22, 2018

Cognitive underpinnings of nationalistic ideology.

From Zmigrod et al.:

Significance
Belief in rigid distinctions between the nationalistic ingroup and outgroup has been a motivating force in citizens’ voting behavior, as evident in the United Kingdom’s 2016 EU referendum. We found that individuals with strongly nationalistic attitudes tend to process information in a more categorical manner, even when tested on neutral cognitive tasks that are unrelated to their political beliefs. The relationship between these psychological characteristics and strong nationalistic attitudes was mediated by a tendency to support authoritarian, nationalistic, conservative, and system-justifying ideologies. This suggests flexible cognitive styles are related to less nationalistic identities and attitudes.
Abstract
Nationalistic identities often play an influential role in citizens’ voting behavior and political engagement. Nationalistic ideologies tend to have firm categories and rules for what belongs to and represents the national culture. In a sample of 332 UK citizens, we tested whether strict categorization of stimuli and rules in objective cognitive tasks would be evident in strongly nationalistic individuals. Using voting behavior and attitudes from the United Kingdom’s 2016 EU referendum, we found that a flexible representation of national identity and culture was linked to cognitive flexibility in the ideologically neutral Wisconsin Card Sorting Test and Remote Associates Test, and to self-reported flexibility under uncertainty. Path analysis revealed that subjective and objective cognitive inflexibility predicted heightened authoritarianism, nationalism, conservatism, and system justification, and these in turn were predictive of support for Brexit and opposition to immigration, the European Union, and free movement of labor. This model accounted for 47.6% of the variance in support for Brexit. Path analysis models were also predictive of participants’ sense of personal attachment to the United Kingdom, signifying that individual differences in cognitive flexibility may contribute toward ideological thinking styles that shape both nationalistic attitudes and personal sense of nationalistic identity. These findings further suggest that emotionally neutral “cold” cognitive information processing—and not just “hot” emotional cognition—may play a key role in ideological behavior and identity.

Monday, May 21, 2018

Brain changes from adolescence to adulthood

Kundu et al. have used a new fMRI technique to look at shifts in functional brain organization in subjects ranging from 8 to 46 years old, finding that localized networks characteristic of youth meld into larger and more functionally distinct networks with maturity. The number of blood oxygenation level-dependent (BOLD) components is halved from adolescence to the fifth decade of life, stabilizing in middle adulthood. The regions driving this change are the dorsolateral prefrontal cortices, parietal cortex, and cerebellum. The more dynamic regions correlate with skills that are works in progress during adolescence: developing a sense of self, monitoring one's performance, and estimating others' intentions.

Friday, May 18, 2018

Has artificial intelligence become alchemy?

Matthew Hutson describes Ali Rahimi's critique of current artificial intelligence (AI) learning algorithms at a recent AI meeting:
Rahimi charged that machine learning algorithms, in which computers learn through trial and error, have become a form of “alchemy.” Researchers, he said, do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another...Without deep understanding of the basic tools needed to build and train new algorithms, he says, researchers creating AIs resort to hearsay, like medieval alchemists. “People gravitate around cargo-cult practices,” relying on “folklore and magic spells,” adds François Chollet, a computer scientist at Google in Mountain View, California. For example, he says, they adopt pet methods to tune their AIs' “learning rates”—how much an algorithm corrects itself after each mistake—without understanding why one is better than others.
Rahimi offers several suggestions for learning which algorithms work best, and when. For starters, he says, researchers should conduct “ablation studies” like those done with the translation algorithm: deleting parts of an algorithm one at a time to see the function of each component. He calls for “sliced analysis,” in which an algorithm's performance is analyzed in detail to see how improvement in some areas might have a cost elsewhere.
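The ablation idea is simple to express in code: disable one component at a time, retrain and evaluate, and compare against the full system. Here is a minimal sketch of that bookkeeping; train_and_evaluate is a hypothetical stand-in that returns a fake accuracy so the loop runs end to end, not any particular benchmark pipeline.
```python
import random

# Components of a hypothetical training recipe to ablate one at a time.
COMPONENTS = ["batch_norm", "dropout", "data_augmentation", "lr_warmup"]

def train_and_evaluate(disabled=()):
    # Placeholder for a real training + evaluation run; returns a fake
    # accuracy that drifts down a little as components are removed.
    random.seed(hash(tuple(sorted(disabled))) % 10_000)
    return 0.90 - 0.02 * len(disabled) + random.uniform(-0.005, 0.005)

baseline = train_and_evaluate()
for component in COMPONENTS:
    score = train_and_evaluate(disabled=(component,))
    print(f"without {component}: accuracy change {score - baseline:+.3f}")
```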
Ben Recht, a computer scientist at the University of California, Berkeley, and coauthor of Rahimi's alchemy keynote talk, says AI needs to borrow from physics, where researchers often shrink a problem down to a smaller “toy problem.” “Physicists are amazing at devising simple experiments to root out explanations for phenomena,” he says.
Csaba Szepesvári, a computer scientist at DeepMind in London, says the field also needs to reduce its emphasis on competitive testing. At present, a paper is more likely to be published if the reported algorithm beats some benchmark than if the paper sheds light on the software's inner workings.
Not everyone agrees with Rahimi and Recht's critique. Yann LeCun, Facebook's chief AI scientist in New York City, worries that shifting too much effort away from bleeding-edge techniques toward core understanding could slow innovation and discourage AI's real-world adoption. “It's not alchemy, it's engineering,” he says. “Engineering is messy.”

Thursday, May 17, 2018

Status threat explains 2016 presidential vote

Niraj Chokshi points to a PNAS article by Diana Mutz. Mutz's abstract:

Significance
Support for Donald J. Trump in the 2016 election was widely attributed to citizens who were “left behind” economically. These claims were based on the strong cross-sectional relationship between Trump support and lacking a college education. Using a representative panel from 2012 to 2016, I find that change in financial wellbeing had little impact on candidate preference. Instead, changing preferences were related to changes in the party’s positions on issues related to American global dominance and the rise of a majority–minority America: issues that threaten white Americans’ sense of dominant group status. Results highlight the importance of looking beyond theories emphasizing changes in issue salience to better understand the meaning of election outcomes when public preferences and candidates’ positions are changing.
Abstract
This study evaluates evidence pertaining to popular narratives explaining the American public’s support for Donald J. Trump in the 2016 presidential election. First, using unique representative probability samples of the American public, tracking the same individuals from 2012 to 2016, I examine the “left behind” thesis (that is, the theory that those who lost jobs or experienced stagnant wages due to the loss of manufacturing jobs punished the incumbent party for their economic misfortunes). Second, I consider the possibility that status threat felt by the dwindling proportion of traditionally high-status Americans (i.e., whites, Christians, and men) as well as by those who perceive America’s global dominance as threatened combined to increase support for the candidate who emphasized reestablishing status hierarchies of the past. Results do not support an interpretation of the election based on pocketbook economic concerns. Instead, the shorter relative distance of people’s own views from the Republican candidate on trade and China corresponded to greater mass support for Trump in 2016 relative to Mitt Romney in 2012. Candidate preferences in 2016 reflected increasing anxiety among high-status groups rather than complaints about past treatment among low-status groups. Both growing domestic racial diversity and globalization contributed to a sense that white Americans are under siege by these engines of change.

Wednesday, May 16, 2018

The effort paradox - Effort is both costly and valued

Inzlicht et al. offer an interesting review of how we (as well as some other animals) associate effort with reward, sometimes pursuing objects and outcomes because of the effort they require rather than in spite of it. Effort can increase value retrospectively (as in the IKEA effect, valuing what you build more than what is ready-made), concurrently (as in flow, the enjoyment of energized focus), or prospectively (as when effortful pro-social work is valued over effortless pro-social work).

Their abstract and list of highlights:
According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort’s role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time.
Highlights
Prominent models in the cognitive sciences indicate that mental and physical effort is costly, and that we avoid it. Here, we suggest that this is only half of the story.
Humans and non-human animals alike tend to associate effort with reward and will sometimes select objects or activities precisely because they require effort (e.g., mountain climbing, ultra-marathons).
Effort adds value to the products of effort, but effort itself also has value.
Effort’s value can not only be accessed concurrently with or immediately following effort exertion, but also in anticipation of such expenditure, suggesting that we already have an intuitive understanding of effort’s potential positive value.
If effort is consistently rewarded, people might learn that effort is valuable and become more willing to exert it in general.

Tuesday, May 15, 2018

Transparency is the mother of 'fake news'

Stanley Fish, writing in the NYTimes "The Stone" series, suggests that fake news is in large part a product of the enthusiasm for transparency and absolutely free speech. I suggest you read the whole piece. Below are a few clips.

The problem is...
...that information, data and the unbounded flow of more and more speech can be politicized — it can, that is, be woven into a narrative that constricts rather than expands the area of free, rational choice. When that happens — and it will happen often — transparency and the unbounded flow of speech become instruments in the production of the very inequalities (economic, political, educational) that the gospel of openness promises to remove. And the more this gospel is preached and believed, the more that the answer to everything is assumed to be data uncorrupted by interests and motives, the easier it will be for interest and motives to operate under transparency’s cover.
This is so because speech and data presented as if they were independent of any mechanism of selectivity will float free of the standards of judgment that are always the content of such mechanisms. Removing or denying the existence of gatekeeping procedures will result not in a fair and open field of transparency but in a field where manipulation and deception find no obstacles. Because it is an article of their faith that politics are bad and the unmediated encounter with data is good, internet prophets will fail to see the political implications of what they are trying to do, for in their eyes political implications are what they are doing away with.
...human difference is irreducible, and there is no “neutral observation language” (a term of the philosopher Thomas Kuhn’s in his 1962 book “The Structure of Scientific Revolutions”) that can bridge, soften, blur and even erase the differences. When people from opposing constituencies clash there is no common language to which they can refer their differences for mechanical resolution; there are only political negotiations that would involve not truth telling but propaganda, threats, insults, deceptions, exaggerations, insinuations, bluffs, posturings — all the forms of verbal manipulation that will supposedly disappear in the internet nirvana.
They won’t. Indeed, they will proliferate because the politically angled speech that is supposedly the source of our problems is in fact the only possible route to their (no doubt temporary) solution. Speech proceeding from a point of view can at least be recognized as such and then countered. You say, “I know where those guys are coming from, and here are my reasons for believing that we should be coming from some place else” — and dialogue begins. It is dialogue inflected by interests and agendas, but dialogue still. But when speech (or information or data) is just sitting there inert, unattached to any perspective, when there are no guidelines, monitors, gatekeepers or filters, what you have are innumerable bits (like Lego) available for assimilation into any project a clever verbal engineer might imagine; and what you don’t have is any mechanism that can stop or challenge the construction project or even assess it. What you have, in short, are the perfect conditions for the unchecked proliferation of what has come to be called “fake news.”

Monday, May 14, 2018

Industrial revolutions are political and social wrecking balls.

I want to point to yet another impressive bit of research and writing by Thomas Edsall, who gives one of the clearest pictures I have seen of current political and economic changes. Edsall begins with a few quotes from Klaus Schwab at Davos in January 2016 on the bright side of the 'fourth industrial revolution' (cf. my posts on Schwab and Davos on Jan. 28, Jan. 29, and Feb. 9, 2016.) and then turns to the downside. Compared with previous industrial revolutions,
...the fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.
And, in a huge understatement:
As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor.
Edsall quotes from various authors:
On balance, near-term AI will have the greatest effect on blue collar work, clerical work and other mid-skilled occupations. Given globalization’s effect on the 2016 presidential election, it is worth noting that near-term AI and globalization replace many of the same jobs...Robots, autonomous vehicles, virtual reality, artificial intelligence, machine learning, drones and the Internet of Things are moving ahead rapidly and transforming the way businesses operate and how people earn their livelihoods. For millions who work in occupations like food service, retail sales and truck driving, machines are replacing their jobs.
AI’s near-term effect will not be mass unemployment but occupational polarization resulting in a slowly growing number of persons moving from mid-skilled jobs into lower wage work
The concern is not that robots will take human jobs and render humans unemployable. The traditional economic arguments against that are borne out by centuries of experience...the problem lies in the process of turnover, which could lead to sustained periods of time with a large fraction of people not working...not all workers will have the training or ability to find the new jobs created by AI. Moreover, this “short run” could last for decades and, in fact, the economy could be in a series of “short runs” for even longer.
A populist politician who campaigned on AI-induced job loss would start with ready-made definitions of the "people” and the “elite” based on national fault lines that were sharpened in the 2016 presidential election. This politician also would have a ready-made example of disrespect: the set of highly educated coastal “elites” who make a very good living developing robots to put “the people” out of work.
...both technology and trade seem to drive structural changes which are consequential for voting behavior...Job losses in manufacturing due to automation do create fertile territory for continued populist appeal...In fact, some of the places where Trump made the biggest gains relative to McCain or Romney are in the heartland of heavy manufacturing where robots did lead to losses of manufacturing jobs...David Autor, an economist at M.I.T., examined the political consequences in congressional districts hurt by increased trade with China and found a significant increase in the election of very conservative Republicans.
Rather than directly opposing free trade policies, individuals in import-exposed communities tend to target scapegoats such as immigrants and minorities. This drives support for right-wing candidates, as they compete electorally by targeting out-groups...in areas affected by trade, the scapegoating of immigrants takes place across the board and is not limited to manufacturing workers.
The hard core of Trump’s voters — more than half of all Republican voters don’t just approve of him, but strongly approve — have, in turn, demonstrated a willingness to deify the president no matter what he does or says — a deification dependent in no small part on Trump’s adoption of new communications technologies like Twitter.
The determination of the Trump wing of the Republican Party to profiteer on technologically driven economic and cultural upheaval — and the success of this strategy to date — suggests that the party will continue on its path. For this reason and many others, it is critically important that Democrats develop a more far-reaching understanding of the disruptive, technologically fueled economic and cultural forces that are now shaping American politics — if they intend to steer the country in a more constructive direction, that is.

Friday, May 11, 2018

How not to mind if someone is lying...

Daniel Effron describes the means by which Trump supporters, aware that many of his statements are falsehoods, manage to temper their potential anger.
Mr. Trump’s representatives have used a subtle psychological strategy to defend his falsehoods: They encourage people to reflect on how the falsehoods could have been true.
Effron's research has confirmed the effectiveness of this tactic. He asked
...2,783 Americans from across the political spectrum to read a series of claims that they were told (correctly) were false. Some claims, like the falsehood about the inauguration crowd, appealed to Mr. Trump’s supporters, and some appealed to his opponents: for instance, a false report (which circulated widely on the internet) that Mr. Trump had removed a bust of the Rev. Dr. Martin Luther King Jr. from the Oval Office.
All the participants were asked to rate how unethical it was to tell the falsehoods. But half the participants were first invited to imagine how the falsehood could have been true if circumstances had been different. For example, they were asked to consider whether the inauguration would have been bigger if the weather had been nicer, or whether Mr. Trump would have removed the bust if he could have gotten away with it.
The results of the experiments... show that reflecting on how a falsehood could have been true did cause people to rate it as less unethical to tell — but only when the falsehoods seemed to confirm their political views. Trump supporters and opponents both showed this effect.
Again, the problem wasn’t that people confused fact and fiction; virtually everyone recognized the claims as false. But when a falsehood resonated with people’s politics, asking them to imagine counterfactual situations in which it could have been true softened their moral judgments. A little imagination can apparently make a lie feel “truthy” enough to give the liar a bit of a pass.
These results reveal a subtle hypocrisy in how we maintain our political views. We use different standards of honesty to judge falsehoods we find politically appealing versus unappealing. When judging a falsehood that maligns a favored politician, we ask, “Was it true?” and then condemn it if the answer is no.
In this time of “fake news” and “alternative facts,”...Even when partisans agree on the facts, they can come to different moral conclusions about the dishonesty of deviating from those facts. The result is more disagreement in an already politically polarized world.

Thursday, May 10, 2018

More on the vagaries of expertise

To follow up on yesterday's post on the illusion of having skills, I want to point to two other articles in this vein. Herrera notes work showing that:
...in group-work settings, instead of determining whether a given person has genuine expertise we sometimes focus on proxies of expertise — the traits and habits we associate, and often conflate, with expertise. That means qualities such as confidence, extroversion and how much someone talks can outweigh demonstrated knowledge when analyzing whether a person is an expert...In other words, your brain can instinctively trust people simply because they sound as if they know what they’re talking about.
And, Gibson reviews the work of Tom Nichols, reflected in his book "The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters." Nichols...
...had begun noticing what he perceived as a new and accelerating—and dangerous—hostility toward established knowledge. People were no longer merely uninformed, Nichols says, but “aggressively wrong” and unwilling to learn. They actively resisted facts that might alter their preexisting beliefs. They insisted that all opinions, however uninformed, be treated as equally serious. And they rejected professional know-how, he says, with such anger. That shook him.
Skepticism toward intellectual authority is bone-deep in the American character, as much a part of the nation’s origin story as the founders’ Enlightenment principles. Overall, that skepticism is a healthy impulse, Nichols believes. But what he was observing was something else, something malignant and deliberate, a collapse of functional citizenship. “Americans have reached a point where ignorance, especially of anything related to public policy, is an actual virtue...To reject the advice of experts is to assert autonomy, a way for Americans to insulate their increasingly fragile egos from ever being told they’re wrong about anything.”
Readers regularly approach Nichols with stories of their own disregarded expertise: doctors, lawyers, plumbers, electricians who’ve gotten used to being second-guessed by customers and clients and patients who know little or nothing about their work. “So many people over the past year have walked up to me and said, ‘You wrote what I was thinking,’” he says.

Wednesday, May 09, 2018

The illusion of skill acquisition.

Kardas and O'Brien document how watching others perform can foster an illusion of skill acquisition. Their abstract:
Modern technologies such as YouTube afford unprecedented access to the skilled performances of other people. Six experiments (N = 2,225) reveal that repeatedly watching others can foster an illusion of skill acquisition. The more people merely watch others perform (without actually practicing themselves), the more they nonetheless believe they could perform the skill, too (Experiment 1). However, people’s actual abilities—from throwing darts and doing the moonwalk to playing an online game—do not improve after merely watching others, despite predictions to the contrary (Experiments 2–4). What do viewers see that makes them think they are learning? We found that extensive viewing allows people to track what steps to take (Experiment 5) but not how those steps feel when taking them. Accordingly, experiencing a “taste” of performing attenuates the illusion: Watching others juggle but then holding the pins oneself tempers perceived change in one’s own ability (Experiment 6). These findings highlight unforeseen problems for self-assessment when watching other people.

Tuesday, May 08, 2018

Protein synthesis in brain tissue is much higher than previously thought.

Smeets et al. use stable isotope methodology during temporal lobe resection surgery to demonstrate protein synthesis rates exceeding 3% per day, suggesting that brain tissue plasticity is far greater than previously assumed. Their abstract:
All tissues undergo continuous reconditioning via the complex orchestration of changes in tissue protein synthesis and breakdown rates. Skeletal muscle tissue has been well studied in this regard, and has been shown to turnover at a rate of 1–2% per day in vivo in humans. Few data are available on protein synthesis rates of other tissues. Because of obvious limitations with regard to brain tissue sampling no study has ever measured brain protein synthesis rates in vivo in humans. Here, we applied stable isotope methodology to directly assess protein synthesis rates in neocortex and hippocampus tissue of six patients undergoing temporal lobectomy for drug-resistant temporal lobe epilepsy (Clinical trial registration: NTR5147). Protein synthesis rates of neocortex and hippocampus tissue averaged 0.17 ± 0.01 and 0.13 ± 0.01%/h, respectively. Brain tissue protein synthesis rates were 3–4-fold higher than skeletal muscle tissue protein synthesis rates (0.05 ± 0.01%/h; P < 0.001). In conclusion, the protein turnover rate of the human brain is much higher than previously assumed.
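For readers checking the arithmetic, the hourly rates in the abstract convert to daily rates by simple multiplication by 24, which is where the "exceeding 3% per day" and 3-4-fold figures come from:
```python
# Convert the reported fractional synthesis rates from %/hour to %/day.
rates_per_hour = {"neocortex": 0.17, "hippocampus": 0.13, "skeletal muscle": 0.05}
for tissue, rate in rates_per_hour.items():
    print(f"{tissue}: {rate:.2f} %/h  ->  {rate * 24:.1f} %/day")
# neocortex ~4.1 %/day, hippocampus ~3.1 %/day, skeletal muscle ~1.2 %/day
```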

Monday, May 07, 2018

Electrical brain stimulation enhances verbal memory performance

Kucewicz et al. study patients with epilepsy undergoing evaluation for resective surgery to show that stimulation of the lateral temporal cortex, but not the hippocampus, parahippocampal neocortex or prefrontal cortex, increases the number of words that patients can remember:
Direct electrical stimulation of the human brain can elicit sensory and motor perceptions as well as recall of memories. Stimulating higher order association areas of the lateral temporal cortex in particular was reported to activate visual and auditory memory representations of past experiences (Penfield and Perot, 1963). We hypothesized that this effect could be used to modulate memory processing. Recent attempts at memory enhancement in the human brain have been focused on the hippocampus and other mesial temporal lobe structures, with a few reports of memory improvement in small studies of individual brain regions. Here, we investigated the effect of stimulation in four brain regions known to support declarative memory: hippocampus, parahippocampal neocortex, prefrontal cortex and temporal cortex. Intracranial electrode recordings with stimulation were used to assess verbal memory performance in a group of 22 patients (nine males). We show enhanced performance with electrical stimulation in the lateral temporal cortex (paired t-test, P = 0.0067), but not in the other brain regions tested. This selective enhancement was observed both on the group level, and for two of the four individual subjects stimulated in the temporal cortex. This study shows that electrical stimulation in specific brain areas can enhance verbal memory performance in humans.

Friday, May 04, 2018

A morning cortisol pulse is crucial for normal cognitive and emotional responses.

From Kalafatakis et al.:

Significance
The hypothalamic-pituitary-adrenal axis is a critical neurohormonal network regulating homeostasis and coordinating stress responses. Here we demonstrate that an oscillating pattern of plasma cortisol is important for maintenance of healthy brain responses as measured by functional neuroimaging and behavioral testing. Our data highlight the crucial role of glucocorticoid rhythmicity in (i) modulating sleep behavior and working memory performance, and (ii) regulating the human brain’s responses under emotional stimulation. Current optimal cortisol replacement therapies for patients with primary or secondary adrenal insufficiency are associated with poor psychological status, and these results suggest that closer attention to aspects of chronotherapy will benefit these patients and may also have major implications for improved glucocorticoid dynamics in stress and psychiatric disease.
Abstract
Glucocorticoids (GCs) are secreted in an ultradian, pulsatile pattern that emerges from delays in the feedforward-feedback interaction between the anterior pituitary and adrenal glands. Dynamic oscillations of GCs are critical for normal cognitive and metabolic function in the rat and have been shown to modulate the pattern of GC-sensitive gene expression, modify synaptic activity, and maintain stress responsiveness. In man, current cortisol replacement therapy does not reproduce physiological hormone pulses and is associated with psychopathological symptoms, especially apathy and attenuated motivation in engaging with daily activities. In this work, we tested the hypothesis that the pattern of GC dynamics in the brain is of crucial importance for regulating cognitive and behavioral processes. We provide evidence that exactly the same dose of cortisol administered in different patterns alters the neural processing underlying the response to emotional stimulation, the accuracy in recognition and attentional bias toward/away from emotional faces, the quality of sleep, and the working memory performance of healthy male volunteers. These data indicate that the pattern of the GC rhythm differentially impacts human cognition and behavior under physiological, nonstressful conditions and has major implications for the improvement of cortisol replacement therapy.

Thursday, May 03, 2018

Sitting is bad for your brain...

It is well known that sitting for long periods each day correlates with a higher risk of heart disease, diabetes, and mortality. Siddarth et al. at UCLA now show a correlation of sedentary behavior with reduced thickness of the medial temporal lobe of our brains, which contains the hippocampus and is central to learning and memory. Their abstract:
Atrophy of the medial temporal lobe (MTL) occurs with aging, resulting in impaired episodic memory. Aerobic fitness is positively correlated with total hippocampal volume, a heavily studied memory-critical region within the MTL. However, research on associations between sedentary behavior and MTL subregion integrity is limited. Here we explore associations between thickness of the MTL and its subregions (namely CA1, CA23DG, fusiform gyrus, subiculum, parahippocampal, perirhinal and entorhinal cortex), physical activity, and sedentary behavior. We assessed 35 non-demented middle-aged and older adults (25 women, 10 men; 45–75 years) using the International Physical Activity Questionnaire for older adults, which quantifies physical activity levels in MET-equivalent units and asks about the average number of hours spent sitting per day. All participants had high resolution MRI scans performed on a Siemens Allegra 3T MRI scanner, which allows for detailed investigation of the MTL. Controlling for age, total MTL thickness correlated inversely with hours of sitting/day (r = -0.37, p = 0.03). In MTL subregion analysis, parahippocampal (r = -0.45, p = 0.007), entorhinal (r = -0.33, p = 0.05) cortical and subiculum (r = -0.36, p = 0.04) thicknesses correlated inversely with hours of sitting/day. No significant correlations were observed between physical activity levels and MTL thickness. Though preliminary, our results suggest that more sedentary non-demented individuals have less MTL thickness. Future studies should include longitudinal analyses and explore mechanisms, as well as the efficacy of decreasing sedentary behaviors to reverse this association.

Wednesday, May 02, 2018

Brain signals predict whom we will like in the future.

From Zerubavel et al.:

Significance
When joining a group, we may initially like some individuals more than others. Likewise, certain group members may be particularly drawn to us. Over months of interaction, these attractions inevitably change and typically become reciprocated. This study uses fMRI to predict such changes in liking. Specifically, we measure newly acquainted group members’ reward system responses to images of one another’s faces. We find that T1 neural responses predict whom one will like in the future. More strikingly, we find that others’ T1 neural responses to us predict whom we will like months later, at T2. This brain-based mechanism helps explain how group members’ initially unreciprocated liking sentiments become mutually reciprocated. This study reveals how our brains interdependently shape interpersonal relationships.
Abstract
Why do certain group members end up liking each other more than others? How does affective reciprocity arise in human groups? The prediction of interpersonal sentiment has been a long-standing pursuit in the social sciences. We combined fMRI and longitudinal social network data to test whether newly acquainted group members’ reward-related neural responses to images of one another’s faces predict their future interpersonal sentiment, even many months later. Specifically, we analyze associations between relationship-specific valuation activity and relationship-specific future liking. We found that one’s own future (T2) liking of a particular group member is predicted jointly by actor’s initial (T1) neural valuation of partner and by that partner’s initial (T1) neural valuation of actor. These actor and partner effects exhibited equivalent predictive strength and were robust when statistically controlling for each other, both individuals’ initial liking, and other potential drivers of liking. Behavioral findings indicated that liking was initially unreciprocated at T1 yet became strongly reciprocated by T2. The emergence of affective reciprocity was partly explained by the reciprocal pathways linking dyad members’ T1 neural data both to their own and to each other’s T2 liking outcomes. These findings elucidate interpersonal brain mechanisms that define how we ultimately end up liking particular interaction partners, how group members’ initially idiosyncratic sentiments become reciprocated, and more broadly, how dyads evolve. This study advances a flexible framework for researching the neural foundations of interpersonal sentiments and social relations that—conceptually, methodologically, and statistically—emphasizes group members’ neural interdependence.

Tuesday, May 01, 2018

You're as old as you feel...

The cliché continues to be backed up by data indicating that it is correct. Marlene Cimons has written a piece pointing to recent work on attitude and aging. A well-known paradox of old age is that as people's minds and bodies decline, instead of feeling worse about their lives, they feel better. A large internet survey by Chopik et al. of over half a million people showed that, consistently across age groups, people report feeling about 20% younger than their current age. People who resist the common negative view of aging and instead feel positive about it are less likely to develop dementia. In a study of 4,765 dementia-free subjects aged 60 or older, Levy et al. found that people with the ε4 variant of the APOE gene (one of the strongest risk factors for dementia) who had positive age beliefs were 50% less likely to develop dementia.

Monday, April 30, 2018

A workshop on music and the brain.

I want to point to this open access article describing an NIH/Kennedy Center workshop on music and the brain, hosted by National Institutes of Health (NIH) Director Francis Collins, soprano Renée Fleming, and Kennedy Center (KC) President Deborah Rutter. Descriptions of the various workshops, in addition to waffling and hot air, include some useful links to basic research articles on music and the brain. Here is a clip from the introduction:
The workshop was organized around the three life stages—childhood, adulthood, and aging. In each session, a panel of 25 experts (listed in Table 1) discussed recent breakthroughs in research and their potential therapeutic applications. Over the course of a day and a half, the panelists recommended basic and applied research that will: (1) increase our understanding of how the brain processes music; (2) lead to scientifically based strategies to enhance normal brain development and function; and (3) result in evidence-based music interventions for brain diseases. In the sections that follow, we will review the discussions from the workshop and highlight the major recommendations that emerged. Finally, we will discuss how funding agencies, scientists, clinicians, and supporters of the arts can work together to catalyze further progress.
The article is worth a read for those (like myself) interested in music and the brain. The workshop on music and the adult brain discusses the effect of musical training on adult brain structure and function. Here are the topics:
“Building”: Music and the Child’s Brain
Music as a Therapeutic Intervention in Children
“Engaging”: Music and the Adult Brain
Music as a Therapeutic Intervention in Adults: Overlapping Circuits Suggest Potential Mechanisms
“Sustaining”: Music and the Aging Brain
Music as a Tool for Restoring Function in the Aging Brain

Friday, April 27, 2018

Risk tolerance is predicted by amygdala-prefrontal cortex connectivity

Jung et al. show that the intrinsic functional and structural connectivity of the amygdala, particularly with the medial prefrontal cortex, predicts our tolerance for risk.

Highlights
•Neural markers for risk tolerance were investigated with multimodal imaging data 
•Risk tolerance correlated with amygdala-medial prefrontal cortex connectivity 
•Risk tolerance correlated with amygdala structure
Summary
Risk tolerance, the degree to which an individual is willing to tolerate risk in order to achieve a greater expected return, influences a variety of financial choices and health behaviors. Here we identify intrinsic neural markers for risk tolerance in a large (n = 108) multimodal imaging dataset of healthy young adults, which includes anatomical and resting-state functional MRI and diffusion tensor imaging. Using a data-driven approach, we found that higher risk tolerance was most strongly associated with greater global functional connectivity (node strength) of and greater gray matter volume in bilateral amygdala. Further, risk tolerance was positively associated with functional connectivity between amygdala and medial prefrontal cortex and negatively associated with structural connectivity between these regions. These findings show how the intrinsic functional and structural architecture of the amygdala, and amygdala-medial prefrontal pathways, which have previously been implicated in anxiety, are linked to individual differences in risk tolerance during economic decision making.

Thursday, April 26, 2018

Propagation of economic inequality through reciprocity and reputation.

Interesting work from Hackel and Zaki...an excerpt from their introduction, followed by their abstract:
Reciprocity and reputation are cornerstones of both evolutionary accounts of prosociality and evidence-based policy suggestions for amplifying cooperation on a large scale. For instance, people are more likely to vote, donate blood, and conserve energy when their actions are observable by others.
In studies of reciprocity, participants typically start out with an even distribution of wealth. By contrast, the real world features enormous and rising economic inequality. We propose that when initial distributions of wealth are unequal, reciprocity and reputation might exacerbate economic inequality.
One possible mechanism is reward-based reinforcement learning, through which people associate actions with rewards. Consider two “givers,” one of whom starts with a $100 endowment and the other of whom starts with a $20 endowment. If each giver shares half of his or her resources, each exhibits equal levels of generosity but provides differing levels of reward value, or raw capital, to beneficiaries. When people experience repeated pairings of a stimulus with reward, they are more likely to return to that stimulus. Similarly, we suggest that rewards build positive affect toward another person—even when those rewards do not reflect the giver’s generosity—and these positive associations can color later choices of people with whom to interact.
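To make the proposed learning mechanism concrete, here is a toy, hedged sketch of a Rescorla–Wagner-style value update in which a learner who weights raw payoff over proportional generosity ends up valuing the wealthier giver more, even though both givers share the same fraction. The learning rate and mixing weight are illustrative assumptions, not the paper's fitted model.

```python
# Toy value-learning model: two givers share 50% of unequal endowments.
# A learner whose updates track raw payoff (weight_on_payoff=1.0) rather than
# generosity (weight_on_payoff=0.0) comes to value the wealthier giver more,
# despite identical generosity.

def learn_preferences(weight_on_payoff, alpha=0.2, n_trials=50):
    givers = {"rich": 100.0, "poor": 20.0}
    values = {name: 0.0 for name in givers}
    for _ in range(n_trials):
        for name, endowment in givers.items():
            payoff = 0.5 * endowment          # each giver shares half
            generosity = 0.5                  # identical proportion shared
            outcome = (weight_on_payoff * payoff
                       + (1 - weight_on_payoff) * generosity)
            values[name] += alpha * (outcome - values[name])  # prediction-error update
    return values

print("payoff-weighted learner:    ", learn_preferences(1.0))
print("generosity-weighted learner:", learn_preferences(0.0))
```

The payoff-weighted learner ends up valuing the rich giver far more than the poor one, while the generosity-weighted learner values them equally, which is the asymmetry the authors argue can widen wealth gaps.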
The abstract:
Reciprocity and reputation are powerful tools for encouraging cooperation on a broad scale. Here, we highlight a potential side effect of these social phenomena: exacerbating economic inequality. In two novel economic games, we manipulated the amount of money with which participants were endowed and then gave them the opportunity to share resources with others. We found that people reciprocated more toward higher-wealth givers, compared with lower-wealth givers, even when those givers were equally generous. Wealthier givers also achieved better reputations than less wealthy ones and therefore received more investments in a social marketplace. These discrepancies were well described by a formal model of reinforcement learning: Individuals who weighted monetary outcomes, rather than generosity, when learning about interlocutors also most strongly helped wealthier individuals. This work demonstrates that reciprocity and reputation—although globally increasing prosociality—can widen wealth gaps and provides a precise account of how inequality grows through social processes.

Wednesday, April 25, 2018

Seeing what you feel - unconscious affect drives perception

Siegel et al. provide yet another example of how it is impossible to separate emotions from cognition and perception:
Affective realism, the phenomenon whereby affect is integrated into an individual’s experience of the world, is a normal consequence of how the brain processes sensory information from the external world in the context of sensations from the body. In the present investigation, we provided compelling empirical evidence that affective realism involves changes in visual perception (i.e., affect changes how participants see neutral stimuli). In two studies, we used an interocular suppression technique, continuous flash suppression, to present affective images outside of participants’ conscious awareness. We demonstrated that seen neutral faces are perceived as more smiling when paired with unseen affectively positive stimuli. Study 2 also demonstrated that seen neutral faces are perceived as more scowling when paired with unseen affectively negative stimuli. These findings have implications for real-world situations and challenge beliefs that affect is a distinct psychological phenomenon that can be separated from cognition and perception.

Tuesday, April 24, 2018

Why exercise alone may not cause weight loss.

Gretchen Reynolds points to work by Lark et al. showing that mice given the opportunity to exercise on a running wheel are subsequently less active for the rest of the day than mice that don't exercise. Thus the effects of voluntary exercise can be offset by a reduction in non-exercise activity. This may explain why numerous studies in recent years examining exercise and weight loss, in both humans and animals, have concluded that exercise by itself is not an effective way to drop pounds.

Monday, April 23, 2018

How our "I" is like a virtual reality headset.

I pass on some clips from Joshua Rothman's article on the ideas of Metzinger, Blanke and others regarding the actual nature of our experienced selves, ideas that arise from virtual embodiment experiments in which subjects become convinced that they are someone else. The work challenges our understanding of who and what we are.
…reality, as we experience it, might be a mental stage set—a representation of the world, rather than the world itself. Having an O.B.E. (out of body experience) could be like visiting the set at night, when it wasn’t being used…Some internal mental system must function as an invisible, unconscious set dresser, making an itch feel like an itch, coloring the sky blue and the grass green.
It isn’t just that we live inside a model of the external world, Metzinger wrote. We also live inside models of our own bodies, minds, and selves. These “self-models” don’t always reflect reality, and they can be adjusted in illogical ways. They can, for example, portray a self that exists outside of the body—an O.B.E.
Metzinger and Blanke set about hacking the self-model. Along with the cognitive scientists Bigna Lenggenhager and Tej Tadi, they created a virtual-reality system designed to induce O.B.E.-like episodes. In 2005, Metzinger put on a virtual-reality head-mounted display—a headset containing a pair of screens, one for each eye, which together produce the illusion of a 3-D world. Inside, he saw his own body, facing away from him, standing in a room. (It was being filmed by a camera placed six feet behind him.) He watched as Lenggenhager stroked its back. Metzinger could feel the stroking, but the body to which it was happening seemed to be situated in front of him. He felt a strange sensation, as though he were drifting in space, or being stretched between the two bodies. He wanted to jump entirely into the body before him, but couldn’t. He seemed marooned outside of himself. It wasn’t quite an out-of-body experience, but it was proof that, using computer technology, the self-model could easily be manipulated. A new area of research had been created: virtual embodiment.
With a team of various collaborators, Slater and Sanchez-Vives have created many other-body simulations; they show how inhabiting a new virtual body can produce meaningful psychological shifts. In one study, participants are re-embodied as a little girl. Surrounded by a stuffed bear, a rocking horse, and other toys, they watch as their mother sternly demands a cleaner room. Afterward, on psychological tests, they associate themselves with more childlike characteristics. (When I tried it, under the supervision of the V.R. researcher Domna Banakou, I was astonished by my small size, and by the intimidating, Olympian height from which the mother addressed me.) In another, white participants spend around ten minutes in the body of a virtual black person, learning Tai Chi. Afterward, their scores on a test designed to reveal unconscious racial bias shift significantly. “These effects happen fast, and seem to last,” Slater said. A week later, the white participants still had less racist attitudes. (The racial-bias results have been replicated several times in Barcelona, and also by a second team, in London.) Embodied simulations seem to slip beneath the cognitive threshold, affecting the associative, unconscious parts of the mind. “It’s directly experiential,” Slater said. “It’s not ‘I know.’ It’s ‘I am.’ ”
 “I think that, in the human self-model, there are many layers. Some layers are transparent, like your bodily perceptions, which seem absolutely real. You just look”—he gestured toward a chair next to us—“and the chair is there. Others are opaque, like our cognitive layer. When we’re thinking, we know that our thoughts are internal mental constructs, which could be true or false.” As a philosopher, Metzinger’s method has been to see if the transparent can be made opaque. In books such as “Being No One” and “The Ego Tunnel,” he aims to show that aspects of our experience which we take to be real are actually “complex forms of virtual reality” created by our brains.
Imagine that you are sitting in the cockpit of an airplane, surrounded by instruments and controls. It’s a futuristic cockpit, with no windows; where the windshield would be, a computer displays the landscape. Using this cockpit, you can pilot your plane with ease. Still, there are questions you are unable to answer. Exactly what kind of plane are you flying? (It could be a Boeing 777 or an Airbus A380.) How accurate is the landscape on the screen? (Perhaps night-vision software has turned night to day.) When you throttle up the engines, you feel a rumbling and hear a roar. Does this mean the plane is accelerating—or could those effects have been simulated? Both scenarios might be true. You could be using a flight simulator to fly a real plane. This, in Metzinger’s view, is how we live our lives.
The instruments in an airplane cockpit report on pitch, yaw, speed, fuel, altitude, engine status, and so on. Our human instruments report on more complicated variables. They tell us about physical facts: the status of our bodies and limbs. But they also report on mental states: on what we are sensing, feeling, and thinking; on our intentions, knowledge, and memories; on where and who we are. You might wonder who is sitting in the cockpit, controlling everything. Metzinger thinks that no one is sitting there. “We” are the instruments, and our sense of selfhood is the sum of their readouts. On the instrument panel, there is a light with a label that says “Pilot Present.” When the light is on, we are self-conscious; we experience being in the cockpit and monitoring the instruments. It’s easy to assume that, while you’re awake, this light is always on. In fact, it’s frequently off—during daydreams, during much of our mental life, which is largely automatic and unconscious—and the plane still flies.
Two facts about the cockpit are of special importance. The first is that although the cockpit controls the airplane, it is not itself an airplane. It’s only a simulation—a model—of a larger, more complex, and very different machine. The implication of this fact is that the stories we tell about what happens in the cockpit—“I pulled up on the stick”; “I touched my jacket”—are very different from the reality of what is happening to the system as a whole. The second fact, harder to grasp, is that we cannot see the cockpit. Even as we consult its models of the outer and inner worlds, we don’t experience ourselves as doing so; we experience ourselves as simply existing. “You cannot recognize your self-model as a model,” Metzinger writes, in “Being No One.” “It is transparent: you look right through it. You don’t see it. But you see with it.” Our mental models of reality are like V.R. headsets that we don’t know we are wearing. Through them, we experience our own inner lives and have inner sensations that feel as solid as stone. But in truth:
Nobody ever was or had a self. All that ever existed were conscious self-models that could not be recognized as models. . . . You are such a system right now. . . . As you read these sentences, you constantly confuse yourself with the content of the self-model activated by your brain.
“Do you know what an ‘illusion of control’ is?” he asked, mischievously. “If people are asked to throw dice, and their task is to throw a high number, they throw the dice harder!” He believes that many experiences of being in control are similarly illusory, including experiences in which we seem to control our own minds. Brain imaging, for example, shows that our thoughts begin before we’re aware of having them. But, Metzinger said, “if a thought crosses the boundary from unconsciousness to consciousness, we feel, ‘I caused this thought.’ ” The sensation of causing our own thoughts is also just another feature of the self-model—a phantom sensation conjured when a readout, labelled “thinking,” switches from “off” to “on.” If you suffer from schizophrenia, this readout may be deactivated inappropriately, and you may feel that someone else is causing your thoughts. “The mind has to explain to itself how it works,” he said, spreading his hands.
Lately, Metzinger has been thinking about his own experience as a meditator. At the center of the meditative experience is the exercise and cultivation of mental autonomy: when the meditator’s mind wanders, he notices and arrests that process, gently returning his mental focus to his breath. “The mind says, ‘I am now re-directing the flashlight of my attention to this,’ ” Metzinger said. “But the thought ‘I am redirecting my mind-wandering’ might itself be another inner story.” He leaned back in his chair and laughed. “It might be that the spiritual endeavor for liberation or detachment can lead to new illusions.”
…you are not the model. You are the whole system—the physical, biological organism in which the self-model is rendered, including its body, its social relationships, and its brain. The model is just a part of that system.” The “I” we experience is smaller than, and different from, the totality of who and what we are.
It turns out that we do, in this sense, possess subtle bodies; we also inhabit subtle selves. While a person exists, he feels that he knows the world and himself directly. In fact, he experiences a model of the world and inhabits a model of himself. These models are maintained by the mind in such a way that their constructed nature is invisible. But it can sometimes be made visible, and then—to a degree—the models can be changed. Something about this discovery is deflating: it turns out that we are less substantial than we thought. Yet it can also be invigorating to understand the constructed, provisional nature of experience. Our perceptions of the world and the self feel real—how could they feel otherwise?—but we can come to understand our own role in the creation of their apparent realness.

Friday, April 20, 2018

Andy Clark on extended mind, A.I., and predictive processing

I want to point to a New Yorker article by Larissa MacFarquhar describing the evolution of the ideas of philosopher of mind Andy Clark.

The first section of the article follows Clark's development of the idea that our minds must be defined as extended beyond our bodies to include the tools in our environment without which they cannot function:
Clark started musing about the ways in which even adult thought was often scaffolded by things outside the head. There were many kinds of thinking that weren’t possible without a pen and paper, or the digital equivalent—complex mathematical calculations, for instance. Writing prose was usually a matter of looping back and forth between screen or paper and mind: writing something down, reading it over, thinking again, writing again. The process of drawing a picture was similar. The more he thought about these examples, the more it seemed to him that to call such external devices “scaffolding” was to underestimate their importance. They were, in fact, integral components of certain kinds of thought. And so, if thinking extended outside the brain, then the mind did, too.
It then describes his moving into artificial intelligence and robotics, encountering the work of Rodney Brooks at M.I.T:
Maybe the way to go was building an intelligence that developed gradually, as in children—seeing and walking first. Perhaps intelligence of many kinds, even the sort that solved theorems and played chess, emerged from the most basic skills—perception, motor control...While constructing a robot that he called Allen, Brooks decided that the best way to build its cognition box was to scrap it altogether. ...It was controlled by three objectives—avoid obstacles, wander randomly, seek distance—layered in a hierarchy, such that the higher could override the lower...It would make no plans. It would simply encounter the world and react.
Robots like Allen... seemed to Clark to represent a fundamentally different idea of the mind. Watching them fumble about, pursuing their simple missions, he recognized that cognition was not the dictates of a high-level central planner perched in a skull cockpit, directing the activities of the body below. Central planning was too cumbersome, too slow to respond to the body’s emergencies. Cognition was a network of partly independent tricks and strategies that had evolved one by one to address various bodily needs. Movement, even in A.I., was not just a lower, practical function that could be grafted, at a later stage, onto abstract reason. The line between action and thought was more blurry than it seemed. A creature didn’t think in order to move: it just moved, and by moving it discovered the world that then formed the content of its thoughts.
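To give a concrete feel for the "layered objectives, higher overrides lower" idea, here is a minimal, hypothetical sketch of a priority-ordered behavior stack in the spirit of Brooks' subsumption approach; the behaviors, sensor fields, and arbitration rule are invented for illustration and are not Brooks' actual control code.

```python
import random

def avoid_obstacles(sensors):
    """Highest-priority layer in this sketch: veer away if something is too close."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_away"
    return None                       # defer to lower layers

def seek_distance(sensors):
    """Middle layer: head toward the most open direction, if one is known."""
    if sensors.get("open_direction"):
        return "head_" + sensors["open_direction"]
    return None

def wander(sensors):
    """Lowest layer: always has something to do."""
    return random.choice(["forward", "turn_left", "turn_right"])

# Layers in priority order: the first layer that issues a command wins, so a
# higher layer suppresses the ones below it. No central planner, no world model;
# the robot just encounters its sensor readings and reacts.
LAYERS = [avoid_obstacles, seek_distance, wander]

def control_step(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(control_step({"obstacle_distance": 0.3, "open_direction": "north"}))  # turn_away
print(control_step({"obstacle_distance": 2.0, "open_direction": "north"}))  # head_north
```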
Then, how does the brain make sense of the world?
To some people, perception—the transmitting of all the sensory noise from the world—seemed the natural boundary between world and mind. Clark had already questioned this boundary with his theory of the extended mind. Then, in the early aughts, he heard about a theory of perception that seemed to him to describe how the mind, even as conventionally understood, did not stay passively distant from the world but reached out into it. It was called predictive processing.
It appeared that the brain had ideas of its own about what the world was like, and what made sense and what didn’t, and those ideas could override what the eyes (and other sensory organs) were telling it. Perception did not, then, simply work from the bottom up; it worked first from the top down. What you saw was not just a signal from the eye, say, but a combination of that signal and the brain’s own ideas about what it expected to see, and sometimes the brain’s expectations took over altogether.
One major difficulty with perception, Clark realized, was that there was far too much sensory signal continuously coming in to assimilate it all. The mind had to choose. And it was not in the business of gathering data for its own sake: the original point of perceiving the world was to help a creature survive in it. For the purpose of survival, what was needed was not a complete picture of the world but a useful one—one that guided action. A brain needed to know whether something was normal or strange, helpful or dangerous. The brain had to infer all that, and it had to do it very quickly, or its body would die—fall into a hole, walk into a fire, be eaten.
So what did the brain do? It focussed on the most urgent or worrying or puzzling facts: those which indicated something unexpected. Instead of taking in a whole scene afresh each moment, as if it had never encountered anything like it before, the brain focussed on the news: what was different, what had changed, what it didn’t expect...This process was not only fast but also cheap—it saved on neural bandwidth, because it took on only the information it needed—which made sense from the point of view of a creature trying to survive...To Clark, predictive processing described how mind, body, and world were continuously interacting, in a way that was mostly so fluid and smoothly synchronized as to remain unconscious.
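A minimal sketch of the prediction-error logic described above, under the simplifying assumption of a single sensory channel and a fixed confidence weight: the percept is the brain's expectation nudged by the weighted "news," and only the error term drives any change.

```python
def predictive_step(prior_estimate, sensory_input, confidence_in_prior=0.7):
    """One update: percept = prior expectation corrected by a weighted prediction error."""
    prediction_error = sensory_input - prior_estimate   # the "news"
    gain = 1.0 - confidence_in_prior                    # how much the news counts
    percept = prior_estimate + gain * prediction_error
    return percept, prediction_error

# The brain expects a brightness of ~0.5; the eye reports 0.9.
percept, error = predictive_step(prior_estimate=0.5, sensory_input=0.9)
print(f"percept={percept:.2f}, prediction error={error:.2f}")
# With high confidence in the prior, the percept stays close to what was expected;
# only a surprising (large-error) signal gets passed up and revises the model.
```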
And, summarizing paragraphs,
He knew that the roboticist Rodney Brooks had recently begun to question a core assumption of the whole A.I. project: that minds could be built of machines. Brooks speculated that one of the reasons A.I. systems and robots appeared to hit a ceiling at a certain level of complexity was that they were built of the wrong stuff—that maybe the fact that robots were not flesh made more of a difference than he’d realized. Clark couldn’t decide what he thought about this. On the one hand, he was no longer a machine functionalist, exactly: he no longer believed that the mind was just a kind of software that could run on hardware of various sorts. On the other hand, he didn’t believe, and didn’t want to believe, that a mind could be constructed only out of soft biological tissue. He was too committed to the idea of the extended mind—to the prospect of brain-machine combinations, to the glorious cyborg future—to give it up.
In a way, though, the structure of the brain itself had some of the qualities that attracted him to the extended-mind view in the first place: it was not one indivisible thing but millions of quasi-independent things, which worked seamlessly together while each had a kind of existence of its own. “There’s something very interesting about life,” Clark says, “which is that we do seem to be built of system upon system upon system. The smallest systems are the individual cells, which have an awful lot of their own little intelligence, if you like—they take care of themselves, they have their own things to do. Maybe there’s a great flexibility in being built out of all these little bits of stuff that have their own capacities to protect and organize themselves. I’ve become more and more open to the idea that some of the fundamental features of life really are important to understanding how our mind is possible. I didn’t use to think that. I used to think that you could start about halfway up and get everything you needed.”

Thursday, April 19, 2018

Baby Boomers reaching the end of their To-Do list

A few clips from an engaging piece by Patricia Hampl on the maturing of the baby boomer generation, those born in 1946 or later who came of age during the Vietnam War era. (Born in 1942, I qualify as being on the leading edge of this generation.)
Life, if you’re lucky, is divided into thirds, my father used to say: youth, middle age and “You look good.”...By the time you’ve worked long enough, hard enough, real life begins to reveal itself as something other than effort, other than accomplishment...It’s a late-arriving awareness of consciousness existing for its own sake...in this latter stage of existence, to have only one task: to waste life in order to find it.
...now the boomers are approaching the other side...the other side of striving...The battle between striving and serenity may be distinctly American. The struggle between toil and the dream of ease is an American birthright, the way a Frenchman expects to have decent wine at a reasonable price, and the whole month of August on vacation...The essential American word isn’t happiness. It’s pursuit.
But how about just giving up? What about wasting time? Giving up or perhaps giving over. To what? Perhaps what an earlier age called “the life of the mind,” the phrase that describes the sovereign self at ease, at home in the world. This isn’t the mind of rational thought, but the lost music of wondering, the sheer value of looking out the window, letting the world float along...That’s what that great American lounger Whitman did. “I loaf and invite my soul,” he wrote. “I lean and loaf at my ease, observing a spear of summer grass.” In this way he came to his great conception of national citizenship for Americans, “the dear love of comrades.” His loafing led to solidarity.
Loafing is not a prudent business plan, not even a life plan, not a recognizably American project. But it begins to look a little like happiness, the kind that claims you, unbidden. Stay put and let the world show up? Or get out there and be a flâneur? Which is it? Well, it’s both.
Maybe this is what my father’s third stage of life is about — wondering, rather than pursuing. You look good — meaning, hey, you’re still alive, you’re still here, and for once you don’t really need to have a to-do list.

Wednesday, April 18, 2018

Basal forebrain and default mode network regulation.

The basal forebrain is an ascending, activating, neuromodulatory system involved in wake–sleep regulation, memory formation, and regulation of sensory information processing. Nair et al. show that it also influences (in mice) the default mode brain network that is active (as in mind wandering) when the brain's attention is not directed externally, as it is during tasks or exploration. They suggest that basal forebrain nuclei might be target regions for up- or down-regulation of default mode activity in conditions marked by default mode dysfunction, such as epilepsy or major depressive disorder.


The default mode network (DMN) is a collection of cortical brain regions that is active during states of rest or quiet wakefulness in humans and other mammalian species. A pertinent characteristic of the DMN is a suppression of local field potential gamma activity [~ 40 Hz brain waves] during cognitive task performance as well as during engagement with external sensory stimuli. Conversely, gamma activity is elevated in the DMN during rest. Here, we document that the rat basal forebrain (BF) exhibits the same pattern of responses, namely pronounced gamma oscillations during quiet wakefulness in the home cage and suppression of this activity during active exploration of an unfamiliar environment. We show that gamma oscillations are localized to the BF and that gamma-band activity in the BF has a directional influence on a hub of the rat DMN, the anterior cingulate cortex, during DMN-dominated brain states. The BF is well known as an ascending, activating, neuromodulatory system involved in wake–sleep regulation, memory formation, and regulation of sensory information processing. Our findings suggest a hitherto undocumented role of the BF as a subcortical node of the DMN, which we speculate may be important for switching between internally and externally directed brain states. We discuss potential BF projection circuits that could underlie its role in DMN regulation and highlight that certain BF nuclei may provide potential target regions for up- or down-regulation of DMN activity that might prove useful for treatment of DMN dysfunction in conditions such as epilepsy or major depressive disorder.
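For those who want a concrete handle on "gamma-band activity," here is a small, hypothetical sketch of how ~40 Hz power might be estimated from a local field potential trace. The simulated signal, sampling rate, and band edges are assumptions, and the authors' actual analyses (including their directional-influence measures) are more involved.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                      # assumed sampling rate, in Hz
t = np.arange(0, 5, 1 / fs)    # 5 s of simulated local field potential

# Hypothetical LFP: a 40 Hz "gamma" component buried in noise, standing in for
# basal forebrain activity during quiet wakefulness.
rng = np.random.default_rng(1)
lfp = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal(t.size)

def gamma_power(signal, fs, low=30.0, high=50.0, order=4):
    """Mean power of the signal after band-pass filtering to the gamma range."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, signal)
    return float(np.mean(filtered ** 2))

print(f"gamma-band power: {gamma_power(lfp, fs):.3f}")
```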

Tuesday, April 17, 2018

Scared by the news? Take the long view.

I tell everyone I meet or chat with to read Steven Pinker's new book "Enlightenment Now." It was the subject of a series of recent MindBlog posts, starting on 3/1/18. I want to pass on a few nice quotes from a recent interview of Pinker by David Bornstein, starting with Pinker's answer to "Why did you write the book?"
The first impetus was coming across data sets showing that the state of humanity has been improving. It’s a conclusion one can’t appreciate from the news — because journalism covers the disasters, crises, dangers and injustices that remain. And until the Messiah comes, there will always be enough of them to fill Page One.
Improvements, in contrast, are gradual, and often consist of things that don’t happen — an absence of war, or famine or crime in much of the world. One can spot them only by looking at data, which tally both the occurrences and the non-occurrences. When I came across data showing plunges in extreme poverty, illiteracy, war, violent crime, racism, sexism, homophobia, domestic violence, disease, lethal accidents and just about every other scourge, I thought these improvements deserved to be better known.
...denying progress can make us fatalistic: If all our efforts at improving the human condition have failed, why throw good money after bad? More generally, people are so jaded by the narrative of decline that they can’t think coherently about progress; the concept just doesn’t compute. I’m regularly confronted with an example of something that has gone wrong, like the opioid epidemic or a rampage shooting, as a refutation of progress — as if progress meant that everything gets better for everyone everywhere always. That wouldn’t be progress; that would be magic. Progress consists of solving problems, and problems are inevitable. So of course things can get worse for some people sometimes.
We live longer: Life expectancy at birth worldwide is now 71 years, and in the developed world, 80 years; through most of human existence it was around 30. Global extreme poverty has declined to 9.6 percent of the world population; 200 years ago, it was at 90 percent. In just the last 30 years, extreme poverty has declined by 75 percent — a stupendous achievement that is almost entirely unappreciated. Equally unappreciated is the fact that 90 percent of the world’s population under the age of 25 can read and write, including girls. In most of the history of Europe, no more than 15 percent could read and write, mostly men.
...liberals, ... in joining the chorus of decline have unilaterally disarmed in the fight for judicious regulation and social spending. Take pollution. Since the formation of the Environmental Protection Agency in 1970, air pollution (aside from carbon dioxide) has fallen by 60 percent, even as Americans have become more populous and richer and have driven more miles. But because many liberals today can’t bring themselves to acknowledge progress, they have cleared the field for opponents of regulation like Scott Pruitt to claim that the regulations have done no good and have only cramped our lives and dragged down the economy. Rather than saying “Environmental regulation has improved the environment while allowing the economy to grow,” they have said, ineffectually, “If you oppose regulation, you’re a bad person.”
The same is true with poverty. Ronald Reagan famously wisecracked, “Some years ago, the federal government declared war on poverty, and poverty won.” Few liberals would disagree. But Reagan was mistaken. If you factor in government social spending, such as the earned-income tax credit, rates of poverty have declined significantly. But here again liberals hand their opponents a weapon: the conclusion that all social programs are ineffective.
If I were an editor, I’d impose a rule that before a pundit writes about any putative change or trend, he or she must compare the present to the past. Commentators should be more data-oriented, especially now that data are so much more widely available. Also, I’d put the kibosh on a pernicious journalistic habit: reporting a small reversal in a trend (because it’s “news”) while ignoring the far more numerous year-by-year improvements (because they’re not news). This is journalistic malpractice, because it gives readers an impression that is opposite to reality.
Like many cognitive psychologists, I think that curriculums that aim at de-biasing and statistical literacy should begin early, including an explicit awareness of the fallacies we’re vulnerable to. It should be part of our conventional wisdom never to trust our intuitions about risk and danger, and to try to circumvent them by seeking data and reasoning about them probabilistically.
Some psychologists despair that it’s naïve to hope that we can overcome our illusions and biases. But history shows that we can outgrow our collective irrationality. We don’t explain disease by miasmas or evil spirits — most people with a sinus infection take antibiotics rather than calling in an exorcist. We don’t engage in human sacrifice to bribe gods for better weather or victory on the battlefield. Not even the most know-nothing politician today would appeal to astrology. So upgrading our intellectual culture can in fact allow us to outgrow our irrationality and delusions.

Monday, April 16, 2018

Cracking the code relating speech prosody to social judgements.

Ponsot et al. use an ingenious method to determine prosodic prototypes that govern social judgments in speech. Clips from their introduction:
In social encounters with strangers, human beings are able to form high-level social representations from very thin slices of expressive behavior and quickly determine whether the other is a friend or a foe and whether they have the ability to enact their good or bad intentions. While much is already known about how facial features contribute to such evaluations, determinants of social judgments in the auditory modality remain poorly understood.
Anthropologists, linguists, and psychologists have noted regularities of pitch contours in social speech for decades. Notably, patterns of high or rising pitch are associated with social traits such as submissiveness or lack of confidence, and low or falling pitch with dominance or self-confidence, a code that has been proposed to be universal across species. Unfortunately, because these observations stem either from acoustic analysis of a limited number of actor-produced utterances or from the linguistic analysis of small ecological corpora, it has remained difficult to attest of their generality and causality in cognitive mechanisms, and we still do not know what exact pitch contour maximally elicits social percepts.
Inspired by a recent series of powerful data-driven studies in visual cognition in which facial prototypes of social traits were derived from human judgments of thousands of computer-generated visual stimuli, we developed a voice-processing algorithm able to manipulate the temporal pitch dynamics of arbitrary recorded voices in a way that is both fully parametric and realistic and used this technique to generate thousands of novel, natural-sounding variants of the same word utterance, each with a randomly manipulated pitch contour. We then asked human listeners to evaluate the social state of the speakers for each of these manipulated stimuli and reconstructed their mental representation of what speech prosody drives such judgments, using the psychophysical technique of reverse correlation.
Here is their full abstract:
Human listeners excel at forming high-level social representations about each other, even from the briefest of utterances. In particular, pitch is widely recognized as the auditory dimension that conveys most of the information about a speaker’s traits, emotional states, and attitudes. While past research has primarily looked at the influence of mean pitch, almost nothing is known about how intonation patterns, i.e., finely tuned pitch trajectories around the mean, may determine social judgments in speech. Here, we introduce an experimental paradigm that combines state-of-the-art voice transformation algorithms with psychophysical reverse correlation and show that two of the most important dimensions of social judgments, a speaker’s perceived dominance and trustworthiness, are driven by robust and distinguishing pitch trajectories in short utterances like the word “Hello,” which remained remarkably stable whether male or female listeners judged male or female speakers. These findings reveal a unique communicative adaptation that enables listeners to infer social traits regardless of speakers’ physical characteristics, such as sex and mean pitch. By characterizing how any given individual’s mental representations may differ from this generic code, the method introduced here opens avenues to explore dysprosody and social-cognitive deficits in disorders like autism spectrum and schizophrenia. In addition, once derived experimentally, these prototypes can be applied to novel utterances, thus providing a principled way to modulate personality impressions in arbitrary speech signals.
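Here is a toy, hedged sketch of the reverse-correlation logic: random pitch contours are "judged" by a simulated listener, and the prototype is recovered by subtracting the average rejected contour from the average chosen one. The simulated decision rule, contour parameterization, and trial count are assumptions, not the authors' voice-transformation toolchain.

```python
import numpy as np

rng = np.random.default_rng(2)

n_trials, n_segments = 5000, 6          # each "Hello" split into 6 pitch segments
# Random pitch-shift contours (in semitones) applied to a base utterance.
contours = rng.normal(0.0, 1.0, size=(n_trials, n_segments))

# Simulated listener: judges a voice "dominant" when pitch falls over the word
# (an assumption standing in for real human judgments).
falling = contours[:, -1] - contours[:, 0]
judged_dominant = falling < 0

# Classification-image style prototype: mean chosen contour minus mean rejected one.
chosen = contours[judged_dominant].mean(axis=0)
rejected = contours[~judged_dominant].mean(axis=0)
prototype = chosen - rejected
print("recovered 'dominance' pitch prototype per segment:", np.round(prototype, 2))
# The recovered kernel shows higher pitch early and lower pitch late, i.e., the
# falling contour the simulated listener was keying on.
```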

Friday, April 13, 2018

Locus Coeruleus integrity and memory in aging adults

The locus coeruleus is a deep brain nucleus whose cells synthesize noradrenaline that is sent via its axonal projections to other parts of the brain. Hämmerer et al. show that its integrity is critical in maintaining memory performance:

Significance
Locus coeruleus (LC) integrity in cognitively normal older adults is a potentially important preclinical marker in dementia. Our study establishes a link between variability in LC integrity and cognitive decline related to noradrenergic modulation in old age. We find that in older adults, reduced LC integrity explains lower memory performance. This effect was more pronounced for memory related to negative events, and accompanied by increased pupil diameter size in response to negative events. The study provides a strong motivation for future research investigating the role of LC integrity in healthy, as well as in pathological, aging.
Abstract
The locus coeruleus (LC) is the principal origin of noradrenaline in the brain. LC integrity varies considerably across healthy older individuals, and is suggested to contribute to altered cognitive functions in aging. Here we test this hypothesis using an incidental memory task that is known to be susceptible to noradrenergic modulation. We used MRI neuromelanin (NM) imaging to assess LC structural integrity and pupillometry as a putative index of LC activation in both younger and older adults. We show that older adults with reduced structural LC integrity show poorer subsequent memory. This effect is more pronounced for emotionally negative events, in accord with a greater role for noradrenergic modulation in encoding salient or aversive events. In addition, we found that salient stimuli led to greater pupil diameters, consistent with increased LC activation during the encoding of such events. Our study presents novel evidence that a decrement in noradrenergic modulation impacts on specific components of cognition in healthy older adults. The findings provide a strong motivation for further investigation of the effects of altered LC integrity in pathological aging.

Thursday, April 12, 2018

A.I. quantifies 100 years of gender and ethnic stereotypes.

Hutson points to work of Garg et al. that uses artificial intelligence to demonstrate how racial and gender stereotypes have changed over time.
They designed their program to use word embeddings, strings of numbers that represent a word’s meaning based on its appearance next to other words in large bodies of text. If people tend to describe women as emotional, for example, “emotional” will appear alongside “woman” more frequently than “man,” and word embeddings will pick that up...Going decade by decade, they found that words related to competence—such as “resourceful” and “clever”—were slowly becoming less masculine. But words related to physical appearance—such as “alluring” and “homely”—were stuck in time...their embeddings were still distinctly “female.”...Asian names became less tightly linked to terms for outsiders...words related to terrorism became more closely associated with words related to Islam...
Here is the Garg et al. abstract:
Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts—e.g., the women’s movement in the 1960s and Asian immigration into the United States—and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science.
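As a concrete illustration of the kind of association measure such analyses rest on, here is a minimal sketch using tiny made-up vectors in place of embeddings trained on a decade of text; the word list and the relative-similarity score are illustrative assumptions rather than Garg et al.'s exact metric.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny, made-up "embeddings" (real ones have hundreds of dimensions and are
# trained separately on text from each decade).
embeddings = {
    "woman":     np.array([0.9, 0.1, 0.3]),
    "man":       np.array([0.1, 0.9, 0.3]),
    "emotional": np.array([0.8, 0.2, 0.4]),
    "clever":    np.array([0.4, 0.6, 0.5]),
}

def gender_association(word):
    """Positive = closer to 'woman', negative = closer to 'man'."""
    return (cosine(embeddings[word], embeddings["woman"])
            - cosine(embeddings[word], embeddings["man"]))

for adjective in ("emotional", "clever"):
    print(adjective, round(gender_association(adjective), 3))
# Tracking such scores across embeddings trained decade by decade is what lets
# one quantify how strongly an adjective is tied to a gender over time.
```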