This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Wednesday, June 06, 2018
Placebo treatment facilitates social trust.
Nasal sprays containing oxytocin have been shown to facilitate pro-social behaviors. Yan et al. have now shown that this effect can be obtained using nasal sprays containing only saline, if subjects are told the spray contains oxytocin and are educated on its expected pro-social effects:
Placebo effect refers to beneficial changes induced by the use of inert treatment, such as placebo-induced relief of physical pain and attenuation of negative affect. To date, we know little about whether placebo treatment could facilitate social functioning, a crucial aspect for well-being of a social species. In the present study, we develop and validate a paradigm to induce placebo effects on social trust and approach behavior (social placebo effect), and show robust evidence that placebo treatment promotes trust in others and increases preference for a closer interpersonal distance. We further examine placebo effects in real-life social interaction and show that placebo treatment makes single, but not pair-bonded, males keep closer to an attractive first-met female and perceive less social anxiety in the female. Finally, we show evidence that the effects of placebo treatment on social trust and approach behavior can be as strong as the effect of intranasal administration of oxytocin, a neuropeptide known for its function in facilitating social cognition and behavior. The finding of the social placebo effect extends our understanding of placebo effects on improvement of physical, mental, and social well-being and suggests clinical potentials in the treatment of social dysfunction.
Tuesday, June 05, 2018
The End of Humanism - Homo Deus
I want to pass on a useful précis of two books by Yuval Harari prepared by my colleague Terry Allard for a meeting of the Chaos and Complexity Seminar at the University of Wisconsin, Madison (where I am an emeritus professor and still maintain an office during the summer months I spend in Madison, away from Austin, TX). Here is his summary of Harari's "Sapiens: A Brief History of Humankind" and "Homo Deus: A Brief History of Tomorrow."
In these two volumes, historian Yuval Harari reviews the successive transformations of humanity and human civilizations from small bands of hunter-gatherers, through the agrarian and industrial revolutions, to today's scientific revolution, while reflecting on what it means to be human. Our collective belief in abstract stories like money, corporations, nations and religions enables human cooperation on a large scale and differentiates us from all other animals. Today's discussion will focus on a possible transition from the humanist values of individual freedoms and "free will" to a disturbing dystopian future where individualism is devalued and people are managed by artificially intelligent systems. This transition is enabled by reductions in famine, plague and war that have historically motivated human behavior. Further advances in biotechnology, psychology and computer science could produce a superhuman elite having the resources and opportunity to benefit directly from technological enhancements while leaving the majority of humankind behind.
Allard's suggested discussion questions:
1. Does technology, social stratification and empire enhance the human experience? Are we happier than hunter-gatherers?
2. What is humanism?
3. Are people really just the sum of their biological algorithms?
4. When will we trust artificial intelligence? Is AI the inevitable next evolutionary step?
5. What do we (humans) really want the future to be? What are our transcendent values?
Harari quotes from an interview in The Guardian (19 March 2017):
Humanity’s biggest myth? “gaining more power over the world, over the environment, we will be able to make ourselves happier and more satisfied with life. Looking again from a perspective of thousands of years, we have gained enormous power over the world and it doesn’t seem to make people significantly more satisfied than in the stone age.”
On Morality: “we are very close to really having divine powers of creation and destruction. The future of the entire ecological system and the future of the whole of life is really now in our hands. And what to do with it is an ethical question and also a scientific question.”
On Inequality: “With the new revolution in artificial intelligence and biotechnology, there is a danger that again all the power and benefits will be monopolised by a very small elite, and most people will end up worse off than before.”
On timing: “I think that Homo sapiens as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different.”
Monday, June 04, 2018
More on the sociopathy of social media
I want to pass on to MindBlog readers some material from Michael Kaplan, who recently pointed me to his YouTube channel OneHandClap, and in particular his video "How Facebook Makes You Depressed." It notes a December 2016 Facebook article published on their official blog, "Hard Questions: Is Spending Time on Social Media Bad for Us?," and summarizes an assortment of recent research that links social media use to depression. It also explores how social media sites like Facebook make use of addictive neurochemical mechanisms.
If you really want the details on how we are being screwed by social media, particularly Google and Facebook, read Jaron Lanier's "Ten Arguments for Deleting Your Social Media Accounts Right Now." I downloaded the Kindle version several days ago, and am finding it incredibly sobering reading, given that the platform for mindblog.dericbownds.net is provided by Google (Blogger), posts like this one are automatically sent on from Blogger to my Facebook and Twitter feeds, my piano performances are on Google's YouTube, my email, my calendar, and so on. A few clips from Lanier:
We’re being tracked and measured constantly, and receiving engineered feedback all the time. We’re being hypnotized little by little by technicians we can’t see, for purposes we don’t know. We’re all lab animals now.
Now everyone who is on social media is getting individualized, continuously adjusted stimuli, without a break, so long as they use their smartphones. What might once have been called advertising must now be understood as continuous behavior modification on a titanic scale.
This book argues in ten ways that what has become suddenly normal— pervasive surveillance and constant, subtle manipulation— is unethical, cruel, dangerous, and inhumane. Dangerous? Oh, yes, because who knows who’s going to use that power, and for what?
The core process that allows social media to make money and that also does the damage to society is behavior modification. Behavior modification entails methodical techniques that change behavioral patterns in animals and people. It can be used to treat addictions, but it can also be used to create them.
(Lanier, Jaron. Ten Arguments for Deleting Your Social Media Accounts Right Now. Henry Holt and Co. Kindle Edition.)
Finally, check out "Hands off my data! 15 default privacy settings you should change right now."
Friday, June 01, 2018
How much should A.I. frighten us?
Continuing the artificial intelligence topic of yesterday's post, I want to pass on the concluding paragraphs of a fascinating New Yorker article by Tad Friend. Friend suggests that thinking about artificial intelligence can help clarify what makes us human, for better and for worse. He points to several recent books on the presumed inevitability of our developing an artificial general intelligence (A.G.I.) that far exceeds our current human capabilities. His final paragraphs:
The real risk of an A.G.I.... may stem not from malice, or emergent self-consciousness, but simply from autonomy. Intelligence entails control, and an A.G.I. will be the apex cogitator. From this perspective, an A.G.I., however well intentioned, would likely behave in a way as destructive to us as any Bond villain. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” Bostrom writes in his 2014 book, “Superintelligence,” a closely reasoned, cumulatively terrifying examination of all the ways in which we’re unprepared to make our masters. A recursive, self-improving A.G.I. won’t be smart like Einstein but “smart in the sense that an average human being is smart compared with a beetle or a worm.” How the machines take dominion is just a detail: Bostrom suggests that “at a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe.” That sounds screenplay-ready—but, ever the killjoy, he notes, “In particular, the AI does not adopt a plan so stupid that even we present-day humans can foresee how it would inevitably fail. This criterion rules out many science fiction scenarios that end in human triumph.”
If we can’t control an A.G.I., can we at least load it with beneficent values and insure that it retains them once it begins to modify itself? Max Tegmark observes that a woke A.G.I. may well find the goal of protecting us “as banal or misguided as we find compulsive reproduction.” He lays out twelve potential “AI Aftermath Scenarios,” including “Libertarian Utopia,” “Zookeeper,” “1984,” and “Self-Destruction.” Even the nominally preferable outcomes seem worse than the status quo. In “Benevolent Dictator,” the A.G.I. “uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that’s really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful.” And more or less indistinguishable from highly immersive video games or a simulation.
Trying to stay optimistic, by his lights—bear in mind that Tegmark is a physicist—he points out that an A.G.I. could explore and comprehend the universe at a level we can’t even imagine. He therefore encourages us to view ourselves as mere packets of information that A.I.s could beam to other galaxies as a colonizing force. “This could be done either rather low-tech by simply transmitting the two gigabytes of information needed to specify a person’s DNA and then incubating a baby to be raised by the AI, or the AI could nanoassemble quarks and electrons into full-grown people who would have all the memories scanned from their originals back on Earth.” Easy peasy. He notes that this colonization scenario should make us highly suspicious of any blueprints an alien species beams at us. It’s less clear why we ought to fear alien blueprints from another galaxy, yet embrace the ones we’re about to bequeath to our descendants (if any).
A.G.I. may be a recurrent evolutionary cul-de-sac that explains Fermi’s paradox: while conditions for intelligent life likely exist on billions of planets in our galaxy alone, we don’t see any. Tegmark concludes that “it appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us.” Therefore, “to program a friendly AI, we need to capture the meaning of life.” Uh-huh.
In the meantime, we need a Plan B. Bostrom’s starts with an effort to slow the race to create an A.G.I. in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.
So the plan, after we create our own god, would be to bow to it and hope it doesn’t require a blood sacrifice. An autonomous-car engineer named Anthony Levandowski has set out to start a religion in Silicon Valley, called Way of the Future, that proposes to do just that. After “The Transition,” the church’s believers will venerate “a Godhead based on Artificial Intelligence.” Worship of the intelligence that will control us, Levandowski told a Wired reporter, is the only path to salvation; we should use such wits as we have to choose the manner of our submission. “Do you want to be a pet or livestock?” he asked. I’m thinking, I’m thinking . . . ♦
Thursday, May 31, 2018
A.I. needs to learn like a human child.
Matthew Hutson summarizes efforts to nudge machine learning researchers away from the assumption that "computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules." Some clips from his article:
In February, MIT launched Intelligence Quest, a research initiative now raising hundreds of millions of dollars to understand human intelligence in engineering terms. Such efforts, researchers hope, will result in AIs that sit somewhere between pure machine learning and pure instinct. They will boot up following some embedded rules, but will also learn as they go.
Part of the quest will be to discover what babies know and when—lessons that can then be applied to machines. That will take time, says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. AI2 recently announced a $125 million effort to develop and test common sense in AI. "We would love to build on the representational structure innate in the human brain," Etzioni says, "but we don't understand how the brain processes language, reasoning, and knowledge."
Harvard University psychologist Elizabeth Spelke has argued that we have at least four "core knowledge" systems giving us a head start on understanding objects, actions, numbers, and space. We are intuitive physicists, for example, quick to understand objects and their interactions...Gary Marcus has composed a minimum list of 10 human instincts that he believes should be baked into AIs, including notions of causality, cost-benefit analysis, and types versus instances (dog versus my dog).
The debate over where to situate an AI on a spectrum between pure learning and pure instinct will continue. But that issue overlooks a more practical concern: how to design and code such a blended machine. How to combine machine learning—and its billions of neural network parameters—with rules and logic isn't clear. Neither is how to identify the most important instincts and encode them flexibly. But that hasn't stopped some researchers and companies from trying.
...researchers are working to inject their AIs with the same intuitive physics that babies seem to be born with. Computer scientists at DeepMind in London have developed what they call interaction networks. They incorporate an assumption about the physical world: that discrete objects exist and have distinctive interactions. Just as infants quickly parse the world into interacting entities, those systems readily learn objects' properties and relationships. Their results suggest that interaction networks can predict the behavior of falling strings and balls bouncing in a box far more accurately than a generic neural network.
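The DeepMind paper itself is not reproduced here, but the basic shape of such an interaction network is easy to sketch. Below is a minimal, untrained toy version of the idea in Python (my own illustration, with made-up state dimensions and random weights standing in for the learned relation and object functions, not DeepMind's code):

# A minimal, untrained sketch of the interaction-network idea: every pair of
# objects is run through a shared "relation" function, the resulting effects
# are summed per object, and a shared "object" function predicts each object's
# next state. Weights are random placeholders; states are [x, y, vx, vy].
import numpy as np

rng = np.random.default_rng(0)
STATE, EFFECT = 4, 8
W_rel = rng.normal(scale=0.1, size=(2 * STATE, EFFECT))     # relation net f_R
W_obj = rng.normal(scale=0.1, size=(STATE + EFFECT, STATE))  # object net f_O

def relation(sender, receiver):
    # effect of one object on another (stand-in for a learned MLP)
    return np.tanh(np.concatenate([sender, receiver]) @ W_rel)

def step(objects):
    # predict every object's next state from its pairwise interactions
    nxt = []
    for i, o_i in enumerate(objects):
        # aggregate the effects all other objects exert on object i
        effect = sum(relation(o_j, o_i) for j, o_j in enumerate(objects) if j != i)
        nxt.append(np.concatenate([o_i, effect]) @ W_obj)
    return np.stack(nxt)

balls = rng.normal(size=(3, STATE))   # three toy "balls in a box"
print(step(balls).shape)              # (3, 4): a predicted next state per ball

The point of the structure is that the assumption "discrete objects exist and interact pairwise" is built in before any training happens, which is exactly the kind of innate scaffolding the article describes.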
Vicarious, a robotics software company in San Francisco, California, is taking the idea further with what it calls schema networks. Those systems, too, assume the existence of objects and interactions, but they also try to infer the causality that connects them. By learning over time, the company's software can plan backward from desired outcomes, as people do. (I want my nose to stop itching; scratching it will probably help.) The researchers compared their method with a state-of-the-art neural network on the Atari game Breakout, in which the player slides a paddle to deflect a ball and knock out bricks. Because the schema network could learn about causal relationships—such as the fact that the ball knocks out bricks on contact no matter its velocity—it didn't need extra training when the game was altered. You could move the target bricks or make the player juggle three balls, and the schema network still aced the game. The other network flailed.
Besides our innate abilities, humans also benefit from something most AIs don't have: a body. To help software reason about the world, Vicarious is "embodying" it so it can explore virtual environments, just as a baby might learn something about gravity by toppling a set of blocks. In February, Vicarious presented a system that looked for bounded regions in 2D scenes by essentially having a tiny virtual character traverse the terrain. As it explored, the system learned the concept of containment, which helps it make sense of new scenes faster than a standard image-recognition convnet that passively surveyed each scene in full. Concepts—knowledge that applies to many situations—are crucial for common sense. "In robotics it's extremely important that the robot be able to reason about new situations," says Dileep George, a co-founder of Vicarious. Later this year, the company will pilot test its software in warehouses and factories, where it will help robots pick up, assemble, and paint objects before packaging and shipping them.
One of the most challenging tasks is to code instincts flexibly, so that AIs can cope with a chaotic world that does not always follow the rules. Autonomous cars, for example, cannot count on other drivers to obey traffic laws. To deal with that unpredictability, Noah Goodman, a psychologist and computer scientist at Stanford University in Palo Alto, California, helps develop probabilistic programming languages (PPLs). He describes them as combining the rigid structures of computer code with the mathematics of probability, echoing the way people can follow logic but also allow for uncertainty: If the grass is wet it probably rained—but maybe someone turned on a sprinkler. Crucially, a PPL can be combined with deep learning networks to incorporate extensive learning. While working at Uber, Goodman and others invented such a "deep PPL," called Pyro. The ride-share company is exploring uses for Pyro such as dispatching drivers and adaptively planning routes amid road construction and game days. Goodman says PPLs can reason not only about physics and logistics, but also about how people communicate, coping with tricky forms of expression such as hyperbole, irony, and sarcasm.
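As a concrete illustration of the wet-grass reasoning mentioned above, here is a toy sketch in plain Python (not Pyro or any real PPL, just rejection sampling with invented probabilities), showing how a model with logical structure can still be queried under uncertainty:

# Toy version of the wet-grass example: rain or a sprinkler can wet the grass.
# We condition on the observation "the grass is wet" and estimate P(rain | wet)
# by keeping only the simulated worlds consistent with that observation.
import random

def model():
    rain = random.random() < 0.3          # prior: 30% chance of rain
    sprinkler = random.random() < 0.2     # prior: 20% chance the sprinkler ran
    p_wet = 0.9 if (rain or sprinkler) else 0.05
    wet = random.random() < p_wet
    return rain, wet

samples = [model() for _ in range(100_000)]
wet_samples = [rain for rain, wet in samples if wet]
print(f"P(rain | grass is wet) ~ {sum(wet_samples) / len(wet_samples):.2f}")

A real PPL such as Pyro automates this kind of conditioning and scales it with deep-learning components, but the underlying logic is the same: rigid structure plus explicit uncertainty.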
Wednesday, May 30, 2018
Intelligent brains have more sparse and efficient nerve connections.
From Genç et al.:
Previous research has demonstrated that individuals with higher intelligence are more likely to have larger gray matter volume in brain areas predominantly located in parieto-frontal regions. These findings were usually interpreted to mean that individuals with more cortical brain volume possess more neurons and thus exhibit more computational capacity during reasoning. In addition, neuroimaging studies have shown that intelligent individuals, despite their larger brains, tend to exhibit lower rates of brain activity during reasoning. However, the microstructural architecture underlying both observations remains unclear. By combining advanced multi-shell diffusion tensor imaging with a culture-fair matrix-reasoning test, we found that higher intelligence in healthy individuals is related to lower values of dendritic density and arborization. These results suggest that the neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning.
Tuesday, May 29, 2018
The light and dark sides of friendship.
I want to point to two brief reviews by Natalie Angier on friendship. She first points to work of Parkinson et al. showing similarities in the brain activities of friends. The Parkinson et al. abstract:
Human social networks are overwhelmingly homophilous: individuals tend to befriend others who are similar to them in terms of a range of physical attributes (e.g., age, gender). Do similarities among friends reflect deeper similarities in how we perceive, interpret, and respond to the world? To test whether friendship, and more generally, social network proximity, is associated with increased similarity of real-time mental responding, we used functional magnetic resonance imaging to scan subjects’ brains during free viewing of naturalistic movies. Here we show evidence for neural homophily: neural responses when viewing audiovisual movies are exceptionally similar among friends, and that similarity decreases with increasing distance in a real-world social network. These results suggest that we are exceptionally similar to our friends in how we perceive and respond to the world around us, which has implications for interpersonal influence and attraction.
Angier also notes work showing that the other side of homophily, or friendship, can be the urge to "otherize" those who differ from you and your friends.
...the study from the University of Michigan had subjects stand outside on a cold winter day and read a brief story about a hiker who was described as either a “left-wing, pro-gay-rights Democrat” or a “right-wing, anti-gay-rights Republican.” When asked whether the hypothetical hiker might feel chilly as well, participants were far more likely to say yes if the protagonist’s political affiliation agreed with their own. But a political adversary — does that person even have skin, let alone a working set of thermal sensors?
The abstract of that study:
What people feel shapes their perceptions of others. In the studies reported here, we examined the assimilative influence of visceral states on social judgment. Replicating prior research, we found that participants who were outside during winter overestimated the extent to which other people were bothered by cold (Study 1), and participants who ate salty snacks without water thought other people were overly bothered by thirst (Study 2). However, in both studies, this effect evaporated when participants believed that the other people under consideration held political views opposing their own. Participants who judged these dissimilar others were unaffected by their own strong visceral-drive states, a finding that highlights the power of dissimilarity in social judgment. Dissimilarity may thus represent a boundary condition for embodied cognition and inhibit an empathic understanding of shared out-group pain. Our findings reveal the need for a better understanding of how people’s internal experiences influence their perceptions of the feelings and experiences of those who may hold values different from their own.
Monday, May 28, 2018
Good things about a party drug (Ketamine).
Ketamine, used extensively as an anesthetic during the Vietnam war and also as a party drug, has a rapidly acting antidepressant effect. A recent clinical trial has shown the antidepressant efficacy of esketamine, the nasal-spray form of the club drug ketamine, suggesting that it might be useful for the rapid treatment of suicidal depression. Conventional antidepressants require 4-6 weeks to be effective. Several labs are studying ketamine's mechanism of action. Yang et al. have found that neuronal burst firing in the lateral habenula, which drives robust depressive-like behaviors, is rapidly blocked by local ketamine infusion. Instead of acting on GABAergic neurons as previously suggested, ketamine blocked glutamatergic neurons in the “anti-reward center” lateral habenula to disinhibit downstream dopaminergic and serotonergic neurons. Widman et al. report that ketamine enhances excitability of pyramidal cells indirectly by reducing synaptic GABAergic inhibition, thus causing disinhibition. They show that only those antagonists with antidepressant efficacy in humans disinhibit pyramidal cells at a clinically relevant concentration, supporting the concept that disinhibition is likely involved in the antidepressant effect of these antagonists.
Friday, May 25, 2018
Why we itch more with aging.
Lewis and Grandl offer context and note the significance of work by Feng et al.:
It is well known that aging is accompanied by the death of specific cell types that function as sensors of outside signals and that this cell death leads to deficits in our ability to detect these signals. For example, age-associated loss of sensory hair cells and/or spiral ganglia neurons in the inner ear leads to progressive hearing loss, particularly of high frequencies. Similarly, death of photoreceptors in the retina of the eye is a key aspect of the pathogenesis of age-related macular degeneration, the leading cause of vision impairment in individuals older than 60 years of age. Feng et al. now identify an unusual link between age-related loss of a sensory cell type and aberrant sensory processing: During aging, the loss of specialized skin cells called Merkel cells results in alloknesis, the pathological sensation of itch in response to innocuous mechanical stimuli...
The finding that Merkel cells normally protect against mechanical itch is notable because it is initially counterintuitive. Whereas in other sensory modalities (for example, vision and hearing), a reduction in sensory cell number as a result of cell death leads to a detrimental reduction in sensation, here, death of Merkel cells leads to an increase in unwanted sensation; that is, an otherwise nonaversive stimulus is perceived as potentially harmful.
The Feng et al. abstract:
The somatosensory system relays many signals ranging from light touch to pain and itch. Touch is critical to spatial awareness and communication. However, in disease states, innocuous mechanical stimuli can provoke pathologic sensations such as mechanical itch (alloknesis). The molecular and cellular mechanisms that govern this conversion remain unknown. We found that in mice, alloknesis in aging and dry skin is associated with a loss of Merkel cells, the touch receptors in the skin. Targeted genetic deletion of Merkel cells and associated mechanosensitive Piezo2 channels in the skin was sufficient to produce alloknesis. Chemogenetic activation of Merkel cells protected against alloknesis in dry skin. This study reveals a previously unknown function of the cutaneous touch receptors and may provide insight into the development of alloknesis.
Thursday, May 24, 2018
The microbiome regulates amygdala-dependent fear recall.
Hoban et al. show an interaction between gut microbes and the amygdala-regulated expression of anxiety and fear. Their technical abstract:
The amygdala is a key brain region that is critically involved in the processing and expression of anxiety and fear-related signals. In parallel, a growing number of preclinical and human studies have implicated the microbiome–gut–brain axis in regulating anxiety and stress-related responses. However, the role of the microbiome in fear-related behaviours is unclear. To this end we investigated the importance of the host microbiome on amygdala-dependent behavioural readouts using the cued fear conditioning paradigm. We also assessed changes in neuronal transcription and post-transcriptional regulation in the amygdala of naive and stimulated germ-free (GF) mice, using a genome-wide transcriptome profiling approach. Our results reveal that GF mice display reduced freezing during the cued memory retention test. Moreover, we demonstrate that under baseline conditions, GF mice display altered transcriptional profile with a marked increase in immediate-early genes (for example, Fos, Egr2, Fosb, Arc) as well as genes implicated in neural activity, synaptic transmission and nervous system development. We also found a predicted interaction between mRNA and specific microRNAs that are differentially regulated in GF mice. Interestingly, colonized GF mice (ex-GF) were behaviourally comparable to conventionally raised (CON) mice. Together, our data demonstrates a unique transcriptional response in GF animals, likely because of already elevated levels of immediate-early gene expression and the potentially underlying neuronal hyperactivity that in turn primes the amygdala for a different transcriptional response. Thus, we demonstrate for what is to our knowledge the first time that the presence of the host microbiome is crucial for the appropriate behavioural response during amygdala-dependent memory retention.
Wednesday, May 23, 2018
Research on subjective well-being.
Given the drumbeat of daily negative news we all face, it is useful to be open to facts about longer term trends showing improvement in different areas of life (cf. my series of posts - starting on 4/1/18 - on Pinker's new book, "Enlightenment Now.") In this vein I pass on an open-access review article from Nature Human Behaviour by Diener et al. describing recent research on subjective well-being. The abstract:
The empirical science of subjective well-being, popularly referred to as happiness or satisfaction, has grown enormously in the past decade. In this Review, we selectively highlight and summarize key researched areas that continue to develop. We describe the validity of measures and their potential biases, as well as the scientific methods used in this field. We describe some of the predictors of subjective well-being such as temperament, income and supportive social relationships. Higher subjective well-being has been associated with good health and longevity, better social relationships, work performance and creativity. At the community and societal levels, cultures differ not only in their levels of well-being but also to some extent in the types of subjective well-being they most value. Furthermore, there are both universal and unique predictors of subjective well-being in various societies. National accounts of subjective well-being to help inform policy decisions at the community and societal levels are now being considered and adopted. Finally we discuss the unknowns in the science and needed future research.
Tuesday, May 22, 2018
Cognitive underpinnings of nationalistic ideology.
From Zmigrod et al.:
Significance
Belief in rigid distinctions between the nationalistic ingroup and outgroup has been a motivating force in citizens’ voting behavior, as evident in the United Kingdom’s 2016 EU referendum. We found that individuals with strongly nationalistic attitudes tend to process information in a more categorical manner, even when tested on neutral cognitive tasks that are unrelated to their political beliefs. The relationship between these psychological characteristics and strong nationalistic attitudes was mediated by a tendency to support authoritarian, nationalistic, conservative, and system-justifying ideologies. This suggests flexible cognitive styles are related to less nationalistic identities and attitudes.
Abstract
Nationalistic identities often play an influential role in citizens’ voting behavior and political engagement. Nationalistic ideologies tend to have firm categories and rules for what belongs to and represents the national culture. In a sample of 332 UK citizens, we tested whether strict categorization of stimuli and rules in objective cognitive tasks would be evident in strongly nationalistic individuals. Using voting behavior and attitudes from the United Kingdom’s 2016 EU referendum, we found that a flexible representation of national identity and culture was linked to cognitive flexibility in the ideologically neutral Wisconsin Card Sorting Test and Remote Associates Test, and to self-reported flexibility under uncertainty. Path analysis revealed that subjective and objective cognitive inflexibility predicted heightened authoritarianism, nationalism, conservatism, and system justification, and these in turn were predictive of support for Brexit and opposition to immigration, the European Union, and free movement of labor. This model accounted for 47.6% of the variance in support for Brexit. Path analysis models were also predictive of participants’ sense of personal attachment to the United Kingdom, signifying that individual differences in cognitive flexibility may contribute toward ideological thinking styles that shape both nationalistic attitudes and personal sense of nationalistic identity. These findings further suggest that emotionally neutral “cold” cognitive information processing—and not just “hot” emotional cognition—may play a key role in ideological behavior and identity.
Monday, May 21, 2018
Brain changes from adolescence to adulthood
Kundu et al. have used a new fMRI technique to look at shifts in functional brain organization in subjects ranging from 8 to 46 years old, finding that localized networks characteristic of youth meld into larger and more functionally distinct networks with maturity. The number of blood oxygenation level-dependent (BOLD) components is halved from adolescence to the fifth decade of life, stabilizing in middle adulthood. The regions driving this change are dorsolateral prefrontal cortices, parietal cortex, and cerebellum. More dynamic regions correlate with skills that are works in progress during adolescence - developing a sense of self, monitoring one's performance, and estimating others' intentions.
Friday, May 18, 2018
Has artificial intelligence become alchemy?
Matthew Hutson describes Ali Rahimi's critique of current artificial intelligence (AI) learning algorithms at a recent AI meeting:
Rahimi charged that machine learning algorithms, in which computers learn through trial and error, have become a form of “alchemy.” Researchers, he said, do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another...Without deep understanding of the basic tools needed to build and train new algorithms, he says, researchers creating AIs resort to hearsay, like medieval alchemists. “People gravitate around cargo-cult practices,” relying on “folklore and magic spells,” adds François Chollet, a computer scientist at Google in Mountain View, California. For example, he says, they adopt pet methods to tune their AIs' “learning rates”—how much an algorithm corrects itself after each mistake—without understanding why one is better than others.
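To make the "learning rate" notion concrete, here is a toy sketch (my own example, not from Rahimi's talk): gradient descent on a simple one-dimensional function with different step sizes, showing how too small a rate crawls toward the answer and too large a rate diverges.

# Gradient descent on f(w) = (w - 3)^2 with different learning rates.
# The learning rate controls how much the parameter is corrected per step.
def descend(lr, steps=25, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2
        w -= lr * grad       # correct the parameter by lr times the error signal
    return w

for lr in (0.01, 0.1, 0.5, 1.1):
    print(f"learning rate {lr:>4}: w ends at {descend(lr):>10.3f}  (target 3)")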
Rahimi offers several suggestions for learning which algorithms work best, and when. For starters, he says, researchers should conduct “ablation studies” like those done with the translation algorithm: deleting parts of an algorithm one at a time to see the function of each component. He calls for “sliced analysis,” in which an algorithm's performance is analyzed in detail to see how improvement in some areas might have a cost elsewhere.
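In the same spirit, the pattern of an ablation study is simple to express in code. The sketch below is purely schematic: the component names and the evaluate() function are placeholders, not a real model, but it shows the loop of disabling one piece at a time and re-measuring performance.

# Schematic ablation loop: turn off one component at a time and re-score.
def evaluate(config):
    # stand-in for "train and test the model with these components enabled"
    weights = {"attention": 0.10, "dropout": 0.02, "data_augmentation": 0.05}
    return 0.70 + sum(w for name, w in weights.items() if config[name])

full = {"attention": True, "dropout": True, "data_augmentation": True}
baseline = evaluate(full)
print(f"full model: {baseline:.2f}")
for component in full:
    ablated = dict(full, **{component: False})   # copy config with one part off
    print(f"without {component}: {evaluate(ablated):.2f} "
          f"(drop of {baseline - evaluate(ablated):.2f})")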
Ben Recht, a computer scientist at the University of California, Berkeley, and coauthor of Rahimi's alchemy keynote talk, says AI needs to borrow from physics, where researchers often shrink a problem down to a smaller “toy problem.” “Physicists are amazing at devising simple experiments to root out explanations for phenomena,” he says.
Csaba SzepesvĂ¡ri, a computer scientist at DeepMind in London, says the field also needs to reduce its emphasis on competitive testing. At present, a paper is more likely to be published if the reported algorithm beats some benchmark than if the paper sheds light on the software's inner workings.
Not everyone agrees with Rahimi and Recht's critique. Yann LeCun, Facebook's chief AI scientist in New York City, worries that shifting too much effort away from bleeding-edge techniques toward core understanding could slow innovation and discourage AI's real-world adoption. “It's not alchemy, it's engineering,” he says. “Engineering is messy.”
Thursday, May 17, 2018
Status threat explains 2016 presidential vote
Niraj Chokshi points to a PNAS article by Diana Mutz. Mutz's abstract:
Significance
Support for Donald J. Trump in the 2016 election was widely attributed to citizens who were “left behind” economically. These claims were based on the strong cross-sectional relationship between Trump support and lacking a college education. Using a representative panel from 2012 to 2016, I find that change in financial wellbeing had little impact on candidate preference. Instead, changing preferences were related to changes in the party’s positions on issues related to American global dominance and the rise of a majority–minority America: issues that threaten white Americans’ sense of dominant group status. Results highlight the importance of looking beyond theories emphasizing changes in issue salience to better understand the meaning of election outcomes when public preferences and candidates’ positions are changing.
Abstract
This study evaluates evidence pertaining to popular narratives explaining the American public’s support for Donald J. Trump in the 2016 presidential election. First, using unique representative probability samples of the American public, tracking the same individuals from 2012 to 2016, I examine the “left behind” thesis (that is, the theory that those who lost jobs or experienced stagnant wages due to the loss of manufacturing jobs punished the incumbent party for their economic misfortunes). Second, I consider the possibility that status threat felt by the dwindling proportion of traditionally high-status Americans (i.e., whites, Christians, and men) as well as by those who perceive America’s global dominance as threatened combined to increase support for the candidate who emphasized reestablishing status hierarchies of the past. Results do not support an interpretation of the election based on pocketbook economic concerns. Instead, the shorter relative distance of people’s own views from the Republican candidate on trade and China corresponded to greater mass support for Trump in 2016 relative to Mitt Romney in 2012. Candidate preferences in 2016 reflected increasing anxiety among high-status groups rather than complaints about past treatment among low-status groups. Both growing domestic racial diversity and globalization contributed to a sense that white Americans are under siege by these engines of change.
Wednesday, May 16, 2018
The effort paradox - Effort is both costly and valued
Inzlicht et al. do an interesting review article on how we (as well as some other animals) associate effort with reward, sometimes pursuing objects and outcomes because of the effort they require rather than in spite of it. Effort can increase value retrospectively (as in the IKEA effect, valuing what you build more than what is ready-made), concurrently (as in flow, the enjoyment of energized focus), or prospectively (as in choosing effortful over low-effort pro-social work).
Their abstract and list of highlights:
According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort’s role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time.
Highlights
Prominent models in the cognitive sciences indicate that mental and physical effort is costly, and that we avoid it. Here, we suggest that this is only half of the story.
Humans and non-human animals alike tend to associate effort with reward and will sometimes select objects or activities precisely because they require effort (e.g., mountain climbing, ultra-marathons).
Effort adds value to the products of effort, but effort itself also has value.
Effort’s value can not only be accessed concurrently with or immediately following effort exertion, but also in anticipation of such expenditure, suggesting that we already have an intuitive understanding of effort’s potential positive value.
If effort is consistently rewarded, people might learn that effort is valuable and become more willing to exert it in general.
Tuesday, May 15, 2018
Transparency is the mother of 'fake news'
Stanley Fish, writing in the NYTimes "The Stone" series, suggests that fake news is in large part a product of the enthusiasm for transparency and absolutely free speech. I suggest you read the whole piece. Below are a few clips.
The problem is..
...that information, data and the unbounded flow of more and more speech can be politicized — it can, that is, be woven into a narrative that constricts rather than expands the area of free, rational choice. When that happens — and it will happen often — transparency and the unbounded flow of speech become instruments in the production of the very inequalities (economic, political, educational) that the gospel of openness promises to remove. And the more this gospel is preached and believed, the more that the answer to everything is assumed to be data uncorrupted by interests and motives, the easier it will be for interest and motives to operate under transparency’s cover.
This is so because speech and data presented as if they were independent of any mechanism of selectivity will float free of the standards of judgment that are always the content of such mechanisms. Removing or denying the existence of gatekeeping procedures will result not in a fair and open field of transparency but in a field where manipulation and deception find no obstacles. Because it is an article of their faith that politics are bad and the unmediated encounter with data is good, internet prophets will fail to see the political implications of what they are trying to do, for in their eyes political implications are what they are doing away with.
...human difference is irreducible, and there is no “neutral observation language” (a term of the philosopher Thomas Kuhn’s in his 1962 book “The Structure of Scientific Revolutions”) that can bridge, soften, blur and even erase the differences. When people from opposing constituencies clash there is no common language to which they can refer their differences for mechanical resolution; there are only political negotiations that would involve not truth telling but propaganda, threats, insults, deceptions, exaggerations, insinuations, bluffs, posturings — all the forms of verbal manipulation that will supposedly disappear in the internet nirvana.
They won’t. Indeed, they will proliferate because the politically angled speech that is supposedly the source of our problems is in fact the only possible route to their (no doubt temporary) solution. Speech proceeding from a point of view can at least be recognized as such and then countered. You say, “I know where those guys are coming from, and here are my reasons for believing that we should be coming from some place else” — and dialogue begins. It is dialogue inflected by interests and agendas, but dialogue still. But when speech (or information or data) is just sitting there inert, unattached to any perspective, when there are no guidelines, monitors, gatekeepers or filters, what you have are innumerable bits (like Lego) available for assimilation into any project a clever verbal engineer might imagine; and what you don’t have is any mechanism that can stop or challenge the construction project or even assess it. What you have, in short, are the perfect conditions for the unchecked proliferation of what has come to be called “fake news.”
Monday, May 14, 2018
Industrial revolutions are political and social wrecking balls.
I want to point to yet another impressive bit of research and writing by Thomas Edsall, who gives one of the clearest pictures I have seen of current political and economic changes. Edsall begins with a few quotes from Klaus Schwab at Davos in January 2016 on the bright side of the 'fourth industrial revolution' (cf. my posts on Schwab and Davos on Jan. 28, Jan. 29, and Feb. 9, 2016) and then the downside. Compared with previous industrial revolutions,
...the fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.
And, in a huge understatement:
As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor.
Edsall quotes from various authors:
On balance, near-term AI will have the greatest effect on blue collar work, clerical work and other mid-skilled occupations. Given globalization’s effect on the 2016 presidential election, it is worth noting that near-term AI and globalization replace many of the same jobs...Robots, autonomous vehicles, virtual reality, artificial intelligence, machine learning, drones and the Internet of Things are moving ahead rapidly and transforming the way businesses operate and how people earn their livelihoods. For millions who work in occupations like food service, retail sales and truck driving, machines are replacing their jobs.
AI’s near-term effect will not be mass unemployment but occupational polarization resulting in a slowly growing number of persons moving from mid-skilled jobs into lower wage work.
The concern is not that robots will take human jobs and render humans unemployable. The traditional economic arguments against that are borne out by centuries of experience...the problem lies in the process of turnover, which could lead to sustained periods of time with a large fraction of people not working...not all workers will have the training or ability to find the new jobs created by AI. Moreover, this “short run” could last for decades and, in fact, the economy could be in a series of “short runs” for even longer.
A populist politician who campaigned on AI-induced job loss would start with ready-made definitions of the “people” and the “elite” based on national fault lines that were sharpened in the 2016 presidential election. This politician also would have a ready-made example of disrespect: the set of highly educated coastal “elites” who make a very good living developing robots to put “the people” out of work.
...both technology and trade seem to drive structural changes which are consequential for voting behavior...Job losses in manufacturing due to automation do create fertile territory for continued populist appeal...In fact, some of the places where Trump made the biggest gains relative to McCain or Romney are in the heartland of heavy manufacturing where robots did lead to losses of manufacturing jobs...David Autor, an economist at M.I.T., examined the political consequences in congressional districts hurt by increased trade with China and found a significant increase in the election of very conservative Republicans.
Rather than directly opposing free trade policies, individuals in import-exposed communities tend to target scapegoats such as immigrants and minorities. This drives support for right-wing candidates, as they compete electorally by targeting out-groups...in areas affected by trade, the scapegoating of immigrants takes place across the board and is not limited to manufacturing workers.
The hard core of Trump’s voters — more than half of all Republican voters don’t just approve of him, but strongly approve — have, in turn, demonstrated a willingness to deify the president no matter what he does or says — a deification dependent in no small part on Trump’s adoption of new communications technologies like Twitter.
The determination of the Trump wing of the Republican Party to profiteer on technologically driven economic and cultural upheaval — and the success of this strategy to date — suggests that the party will continue on its path. For this reason and many others, it is critically important that Democrats develop a more far-reaching understanding of the disruptive, technologically fueled economic and cultural forces that are now shaping American politics — if they intend to steer the country in a more constructive direction, that is.
Friday, May 11, 2018
How not to mind if someone is lying...
Daniel Effron describes the means by which Trump supporters, aware that many of his statements are falsehoods, manage to temper their potential anger.
Mr. Trump’s representatives have used a subtle psychological strategy to defend his falsehoods: They encourage people to reflect on how the falsehoods could have been true.
Effron's research has confirmed the effectiveness of this tactic. He asked
...2,783 Americans from across the political spectrum to read a series of claims that they were told (correctly) were false. Some claims, like the falsehood about the inauguration crowd, appealed to Mr. Trump’s supporters, and some appealed to his opponents: for instance, a false report (which circulated widely on the internet) that Mr. Trump had removed a bust of the Rev. Dr. Martin Luther King Jr. from the Oval Office.
All the participants were asked to rate how unethical it was to tell the falsehoods. But half the participants were first invited to imagine how the falsehood could have been true if circumstances had been different. For example, they were asked to consider whether the inauguration would have been bigger if the weather had been nicer, or whether Mr. Trump would have removed the bust if he could have gotten away with it.
The results of the experiments... show that reflecting on how a falsehood could have been true did cause people to rate it as less unethical to tell — but only when the falsehoods seemed to confirm their political views. Trump supporters and opponents both showed this effect.
Again, the problem wasn’t that people confused fact and fiction; virtually everyone recognized the claims as false. But when a falsehood resonated with people’s politics, asking them to imagine counterfactual situations in which it could have been true softened their moral judgments. A little imagination can apparently make a lie feel “truthy” enough to give the liar a bit of a pass.
These results reveal a subtle hypocrisy in how we maintain our political views. We use different standards of honesty to judge falsehoods we find politically appealing versus unappealing. When judging a falsehood that maligns a favored politician, we ask, “Was it true?” and then condemn it if the answer is no.
In this time of “fake news” and “alternative facts,”...Even when partisans agree on the facts, they can come to different moral conclusions about the dishonesty of deviating from those facts. The result is more disagreement in an already politically polarized world.
Thursday, May 10, 2018
More on the vagaries of expertise
To follow up on yesterday's post on the illusion of having skills, I want to point to two other articles in this vein. Herrera notes work showing that:
...in group-work settings, instead of determining whether a given person has genuine expertise we sometimes focus on proxies of expertise — the traits and habits we associate, and often conflate, with expertise. That means qualities such as confidence, extroversion and how much someone talks can outweigh demonstrated knowledge when analyzing whether a person is an expert...In other words, your brain can instinctively trust people simply because they sound as if they know what they’re talking about.
And, Gibson reviews the work of Tom Nichols, reflected in his book "The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters." Nichols...
...had begun noticing what he perceived as a new and accelerating—and dangerous—hostility toward established knowledge. People were no longer merely uninformed, Nichols says, but “aggressively wrong” and unwilling to learn. They actively resisted facts that might alter their preexisting beliefs. They insisted that all opinions, however uninformed, be treated as equally serious. And they rejected professional know-how, he says, with such anger. That shook him.
Skepticism toward intellectual authority is bone-deep in the American character, as much a part of the nation’s origin story as the founders’ Enlightenment principles. Overall, that skepticism is a healthy impulse, Nichols believes. But what he was observing was something else, something malignant and deliberate, a collapse of functional citizenship. “Americans have reached a point where ignorance, especially of anything related to public policy, is an actual virtue...To reject the advice of experts is to assert autonomy, a way for Americans to insulate their increasingly fragile egos from ever being told they’re wrong about anything.”
Readers regularly approach Nichols with stories of their own disregarded expertise: doctors, lawyers, plumbers, electricians who’ve gotten used to being second-guessed by customers and clients and patients who know little or nothing about their work. “So many people over the past year have walked up to me and said, ‘You wrote what I was thinking,’” he says.