Friday, December 29, 2017

When intuition overrides reason.

Gilbert Chin points to work by Walco and Risen showing that a third to a half of us will elect to rely on gut feelings even after having demonstrated an accurate understanding of which choice is more likely to pay off. This suggests that error detection and error correction are not coupled in the way Kahneman's dual-process model assumes (with system 1's intuitive default decision subject to system 2's determination of accuracy); rather, we can detect an error in our intuition and still decline to correct it. The abstract:
Will people follow their intuition even when they explicitly recognize that it is irrational to do so? Dual-process models of judgment and decision making are often based on the assumption that the correction of errors necessarily follows the detection of errors. But this assumption does not always hold. People can explicitly recognize that their intuitive judgment is wrong but nevertheless maintain it, a phenomenon known as acquiescence. Although anecdotes and experimental studies suggest that acquiescence occurs, the empirical case for acquiescence has not been definitively established. In four studies—using the ratio-bias paradigm, a lottery exchange game, blackjack, and a football coaching decision—we tested acquiescence using recently established criteria. We provide clear empirical support for acquiescence: People can have a faulty intuitive belief about the world (Criterion 1), acknowledge the belief is irrational (Criterion 2), but follow their intuition nonetheless (Criterion 3)—even at a cost.
(Motivated readers can request a PDF of the article with experimental details from me.)

Thursday, December 28, 2017

On gratitude...

I want to pass on this bit from an essay by Philip Garrity in "The Stone," the New York Times philosophy forum. Reflecting on his recovery from the vibrancy and trauma of illness, he notes:
I notice myself falling back into that same pattern of trying to harness the vibrancy of illness...I am learning, however slowly, that maintaining that level of mental stamina, that fever pitch of experience, is less a recipe for enlightenment, and more for exhaustion.
The existentialist philosopher Jean-Paul Sartre describes our experience as a perpetual transitioning between unreflective consciousness, “living-in-the-world,” and reflective consciousness, “thinking-about-the-world.” Gratitude seems to necessitate an act of reflection on experience, which, in turn, requires a certain abstraction away from that direct experience. Paradoxically, our capacity for gratitude is simultaneously enhanced and frustrated as we strive to attain it.
Perhaps, then, there is an important difference between reflecting on wellness and experiencing wellness. My habitual understanding of gratitude had me forcefully lodging myself into the realm of reflective consciousness, pulling me away from living-in-the-world. I was constantly making an inventory of my wellness, too busy counting the coins to ever spend them.
Gratitude, in the experiential sense, requires that we wade back into the current of unreflective consciousness, which, to the egocentric mind, can easily feel like an annihilation of consciousness altogether. Yet, Sartre says that action that is unreflective isn’t necessarily unconscious. There is something Zen about this, the actor disappearing into the action. It is the way of the artist in the act of creative expression, the musician in the flow of performance. But, to most of us, it is a loss of self — and the sense of competency that comes with it.
If there is any sage in me, he says I must accept the vulnerability of letting the pain fade, of allowing the wounds to heal. Even in the wake of grave illness — or, more unsettlingly, in anticipation of it — we must risk falling back asleep into wellness.

Wednesday, December 27, 2017

Mind the Hype - Mindfulness and Meditation

Smith et al. point to and summarize an article by Van Dam et al. I pass on the Van Dam et al. abstract:
During the past two decades, mindfulness meditation has gone from being a fringe topic of scientific investigation to being an occasional replacement for psychotherapy, tool of corporate well-being, widely implemented educational practice, and “key to building more resilient soldiers.” Yet the mindfulness movement and empirical evidence supporting it have not gone without criticism. Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed. Addressing such concerns, the present article discusses the difficulties of defining mindfulness, delineates the proper scope of research into mindfulness practices, and explicates crucial methodological issues for interpreting results from investigations of mindfulness. In doing so, the authors draw on their diverse areas of expertise to review the present state of mindfulness research, comprehensively summarizing what we do and do not know, while providing a prescriptive agenda for contemplative science, with a particular focus on assessment, mindfulness training, possible adverse effects, and intersection with brain imaging. Our goals are to inform interested scientists, the news media, and the public, to minimize harm, curb poor research practices, and staunch the flow of misinformation about the benefits, costs, and future prospects of mindfulness meditation.
And also Smith et al.'s list of points that seem fairly settled (they provide supporting references):
-Meditation almost certainly does sharpen your attention. 
-Long-term, consistent meditation does seem to increase resiliency to stress. 
-Meditation does appear to increase compassion. It also makes our compassion more effective. 
-Meditation does seem to improve mental health—but it’s not necessarily more effective than other steps you can take. 
-Mindfulness could have a positive impact on your relationships. 
-Mindfulness seems to reduce many kinds of bias. 
-Meditation does have an impact on physical health—but it’s modest.  
-Meditation might not be good for everyone all the time. 
-What kind of meditation is right for you? That depends. 
-How much meditation is enough? That also depends.

Tuesday, December 26, 2017

The 11 separate nations of the United States

I just became aware, through an article by Matthew Speiser in The Independent, of the interesting work of Colin Woodard, who suggests that 11 distinct cultures have historically divided the US. Speiser gives capsule descriptions of the nations, which bear the names Yankeedom, New Netherland, The Midlands, Tidewater, Greater Appalachia, Deep South, El Norte, The Left Coast, The Far West, New France, and First Nation, and which are illustrated by a graphic in his article.

Monday, December 25, 2017

Autopilots and metastates of our brains.

I pass on summaries from two recent contributions to understanding automatic information processing in our brains. First, from Vatansever et al., work showing a role for the default mode network, which has been the subject of many MindBlog posts:
Concurrent with mental processes that require rigorous computation and control, a series of automated decisions and actions govern our daily lives, providing efficient and adaptive responses to environmental demands. Using a cognitive flexibility task, we show that a set of brain regions collectively known as the default mode network plays a crucial role in such “autopilot” behavior, i.e., when rapidly selecting appropriate responses under predictable behavioral contexts. While applying learned rules, the default mode network shows both greater activity and connectivity. Furthermore, functional interactions between this network and hippocampal and parahippocampal areas as well as primary visual cortex correlate with the speed of accurate responses. These findings indicate a memory-based “autopilot role” for the default mode network, which may have important implications for our current understanding of healthy and adaptive brain processing.
Also, Vidaurre et al. describe two distinct networks, or metastates, within which the brain cycles.
We address the important question of the temporal organization of large-scale brain networks, finding that the spontaneous transitions between networks of interacting brain areas are predictable. More specifically, the network activity is highly organized into a hierarchy of two distinct metastates, such that transitions are more probable within, than between, metastates. One of these metastates represents higher order cognition, and the other represents the sensorimotor systems. Furthermore, the time spent in each metastate is subject-specific, is heritable, and relates to behavior. Although evidence of non–random-state transitions has been found at the microscale, this finding at the whole-brain level, together with its relation to behavior, has wide implications regarding the cognitive role of large-scale resting-state networks.
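To make the metastate idea concrete, here is a minimal sketch in Python of the kind of structure the abstract describes: a Markov chain over brain networks whose transition matrix is block-structured, so that jumps within a metastate are much more probable than jumps between metastates. The states, probabilities, and labels are illustrative assumptions of mine, not values from the paper.
```python
# Hypothetical illustration of Vidaurre et al.'s metastate structure:
# six brain networks, grouped into a "cognitive" metastate (states 0-2)
# and a "sensorimotor" metastate (states 3-5). Within-metastate jumps
# are far more likely than between-metastate jumps.
import numpy as np

within, between = 0.31, 0.023          # illustrative probabilities
P = np.full((6, 6), between)           # between-metastate transitions
P[:3, :3] = within                     # within cognitive metastate
P[3:, 3:] = within                     # within sensorimotor metastate
P /= P.sum(axis=1, keepdims=True)      # make each row a proper distribution

rng = np.random.default_rng(2)
state, visits = 0, np.zeros(6)
for _ in range(10_000):                # simulate spontaneous transitions
    state = rng.choice(6, p=P[state])
    visits[state] += 1
print(visits / visits.sum())           # fraction of time in each network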

Friday, December 22, 2017

Detailed demographics from Google Street Views.

Interesting....neighborhood-level estimates of the racial, economic and political characteristics of 200 U.S. cities using Google Street View images of people's cars. ...From Gebru et al.:
The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time.
From the summary by Ingraham:
...The 22 million vehicles in the Google Street View database comprise roughly 8 percent of all vehicles in the United States...the researchers first paired the Zip code-level vehicle data with numbers on race, income and education from the U.S. Census Bureau's American Community Survey. They did this for a random 15 percent of the Zip codes in their data set to create a “training set.” They then created another algorithm to go through the training set to see how vehicle characteristics correlated with neighborhood characteristics: What kinds of vehicles are disproportionately likely to appear in white neighborhoods, or black ones? Low-income vs. high-income? Highly-educated areas vs. less-educated ones?
You can do similar exercises for other demographic characteristics, like educational attainment. People with graduate degrees were more likely to drive Audi hatchbacks with high city MPG. Those with less than a high school education, on the other hand, were more likely to drive cars made by U.S. manufacturers in the 1990s.
“We found a strong correlation between our results and ACS [American Community Survey] values for every demographic statistic we examined,” the researchers wrote. They plotted the algorithm's demographic estimates against the actual numbers from the ACS and measured their correlation coefficient: a number from zero (no correlation) to 1 (perfect correlation) that measures how accurately one set of numbers can predict the variation in a separate set of numbers.
At the city level, the algorithm did a particularly good job of predicting the percent of Asians (correlation coefficient of 0.87), blacks (0.82) and whites (0.77). It also predicted median household income (0.82) quite well. On measures of educational attainment, the correlation coefficients ran from about 0.54 to 0.70 — again, not perfect, but fairly impressive accuracy considering the predictions derived solely from auto information and nothing else.
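For readers who want the flavor of this kind of pipeline, here is a minimal sketch using synthetic data and off-the-shelf regression rather than the authors' deep-learning system: fit a model on a 15 percent training subset of zip codes, then report the correlation coefficient on the held-out zip codes. All data, dimensions, and variable names here are hypothetical.
```python
# Sketch (not Gebru et al.'s actual pipeline): predict an ACS statistic
# (e.g., median income) for each zip code from counts of vehicle types
# seen there, training on 15% of zip codes and scoring the rest.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(3000, 200)).astype(float)  # zip codes x vehicle features
y = X @ rng.normal(size=200) + rng.normal(scale=5.0, size=3000)  # synthetic "ACS" target

# 15% of zip codes form the training set, as in the article.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.15, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

r, _ = pearsonr(model.predict(X_te), y_te)  # 0 = no correlation, 1 = perfect
print(f"correlation coefficient: {r:.2f}")
```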

Thursday, December 21, 2017

Some morning Rachmaninoff - Fantasy Piece in E, Op. 3, No. 3

Here is a Rachmaninoff Fantasy Piece in E, Op. 3, No. 3, which I recorded last week, continuing to experiment with my new iPhone X and a Zoom iQ6 condenser microphone in the Lightning port for making video recordings that can be edited and then sent directly to YouTube.


Wednesday, December 20, 2017

Wealth inequality as a law of nature.

Here are the Significance statement and abstract from Scheffer et al., a bit of work that casts an interesting light on the current Republican tax legislation, which significantly accelerates the unequal distribution of wealth in this country, as described nicely by David Leonhardt:

Significance
Inequality is one of the main drivers of social tension. We show striking similarities between patterns of inequality between species abundances in nature and wealth in society. We demonstrate that in the absence of equalizing forces, such large inequality will arise from chance alone. While natural enemies have an equalizing effect in nature, inequality in societies can be suppressed by wealth-equalizing institutions. However, over the past millennium, such institutions have been weakened during periods of societal upscaling. Our analysis suggests that due to the very same mathematical principle that rules natural communities (indeed, a “law of nature”) extreme wealth inequality is inevitable in a globalizing world unless effective wealth-equalizing institutions are installed on a global scale.
Abstract
Most societies are economically dominated by a small elite, and similarly, natural communities are typically dominated by a small fraction of the species. Here we reveal a strong similarity between patterns of inequality in nature and society, hinting at fundamental unifying mechanisms. We show that chance alone will drive 1% or less of the community to dominate 50% of all resources in situations where gains and losses are multiplicative, as in returns on assets or growth rates of populations. Key mechanisms that counteract such hyperdominance include natural enemies in nature and wealth-equalizing institutions in society. However, historical research of European developments over the past millennium suggests that such institutions become ineffective in times of societal upscaling. A corollary is that in a globalizing world, wealth will inevitably be appropriated by a very small fraction of the population unless effective wealth-equalizing institutions emerge at the global level.
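The paper's central mechanism is easy to reproduce in a few lines: when gains and losses are multiplicative, chance alone concentrates wealth in a tiny elite. Below is a minimal simulation sketch with illustrative parameters of my own (not the authors'), including the Gini index described in the figure caption below.
```python
# Sketch of the multiplicative-luck mechanism: identical agents whose
# wealth is repeatedly multiplied by random factors end up with a tiny
# fraction holding half of everything. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_years = 10_000, 500
wealth = np.ones(n_agents)
for _ in range(n_years):
    wealth *= rng.lognormal(mean=0.0, sigma=0.1, size=n_agents)  # pure luck

def gini(w):
    """Gini index: 0 = complete equality, 1 = maximal inequality."""
    w = np.sort(w)
    n = len(w)
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

top = np.sort(wealth)[::-1]                        # richest first
k = np.searchsorted(np.cumsum(top) / top.sum(), 0.5) + 1
print(f"richest {k / n_agents:.2%} of agents hold 50% of the wealth")
print(f"Gini index: {gini(wealth):.2f}")
```
With these settings, roughly the top 1 percent of agents end up holding half the wealth, echoing the abstract's claim.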



Figure - Inequality in society (Left) and nature (Right). The Upper panels illustrate the similarity between the wealth distribution of the world’s 1,800 billionaires (A) (8) and the abundance distribution among the most common trees in the Amazon forest (B) (3). The Lower panels illustrate inequality in nature and society more systematically, comparing the Gini index of wealth in countries (C) and the Gini index of abundance in a large set of natural communities (D). (The Gini index is an indicator of inequality that ranges from 0 for entirely equal distributions to 1 for the most unequal situation. It is a more integrative indicator of inequality than the fraction of individuals that possesses 50% of all wealth, but the two are closely related in practice. Surprisingly, Gini indices for our natural communities are quite similar to the Gini indices for wealth distributions of 181 countries.)

Tuesday, December 19, 2017

Skill networks and human capital.

Anderson offers an interesting analysis showing that workers who can combine different skills synergistically earn more than other skilled workers. I pass on both the Significance statement and the abstract:

Significance
The relationship between worker human capital and wages is a question of considerable economic interest. Skills are usually characterized using a one-dimensional measure, such as years of training. However, in knowledge-based production, the interaction between a worker’s skills is also important. Here, we propose a network-based method for characterizing worker skill sets. We construct a human capital network, wherein nodes are skills and two skills are connected if a worker has both or both are required for the same job. We then illustrate the method by analyzing an online freelance labor market, showing that workers with diverse skills earn higher wages and that those who use their diverse skills in combination earn the highest wages of all.
Abstract
We propose a network-based method for measuring worker skills. We illustrate the method using data from an online freelance website. Using the tools of network analysis, we divide skills into endogenous categories based on their relationship with other skills in the market. Workers who specialize in these different areas earn dramatically different wages. We then show that, in this market, network-based measures of human capital provide additional insight into wages beyond traditional measures. In particular, we show that workers with diverse skills earn higher wages than those with more specialized skills. Moreover, we can distinguish between two different types of workers benefiting from skill diversity: jacks-of-all-trades, whose skills can be applied independently on a wide range of jobs, and synergistic workers, whose skills are useful in combination and fill a hole in the labor market. On average, workers whose skills are synergistic earn more than jacks-of-all-trades.
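Here is a minimal sketch of the network construction described in the Significance statement, with invented worker data: skills are nodes, two skills are linked whenever one worker holds both, and a generic community-detection step stands in for the paper's derivation of endogenous skill categories.
```python
# Sketch of a human capital network: nodes are skills; an edge joins two
# skills whenever a single worker possesses both. Worker data invented.
import itertools
import networkx as nx
from networkx.algorithms import community

workers = {
    "w1": {"python", "statistics", "writing"},
    "w2": {"python", "web design"},
    "w3": {"writing", "translation"},
}

G = nx.Graph()
for skills in workers.values():
    # connect every pair of skills co-occurring in one worker's skill set
    G.add_edges_from(itertools.combinations(sorted(skills), 2))

# Endogenous skill categories emerge from the network's structure:
categories = community.greedy_modularity_communities(G)
print([sorted(c) for c in categories])
```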

Monday, December 18, 2017

Positive stimuli blur time.

From Roberts et al.:
Anecdotal reports that time “flies by” or “slows down” during emotional events are supported by evidence that the motivational relevance of stimuli influences subsequent duration judgments. Yet it is unknown whether the subjective quality of events as they unfold is altered by motivational relevance. In a novel paradigm, we measured the subjective experience of moment-to-moment visual perception. Participants judged the temporal smoothness of high-approach positive images (desserts), negative images (e.g., of bodily mutilation), and neutral images (commonplace scenes) as they faded to black. Results revealed approach-motivated blurring, such that positive stimuli were judged as smoother and negative stimuli as choppier relative to neutral stimuli. Participants’ ratings of approach motivation predicted perceived fade smoothness after we controlled for low-level stimulus features. Electrophysiological data indicated that approach-motivated blurring modulated relatively rapid perceptual activation. These results indicate that stimulus value influences subjective temporal perceptual acuity; approach-motivating stimuli elicit perception of a “blurred” frame rate characteristic of speeded motion.

Friday, December 15, 2017

Teaching A.I. to explain itself

An awkward feature of the artificial intelligence, or machine learning, algorithms that teach themselves to translate languages, analyze X-ray images and mortgage loans, judge the probability of behaviors from faces, and so on, is that we are unable to discern exactly what they are doing as they perform these functions. How can we trust these machines unless they can explain themselves? This issue is the subject of an interesting piece by Cliff Kuang. A few clips from the article:
Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand.
A decade in the making, the European Union’s General Data Protection Regulation finally goes into effect in May 2018. It’s a sprawling, many-tentacled piece of legislation whose opening lines declare that the protection of personal data is a universal human right. Among its hundreds of provisions, two seem aimed squarely at where machine learning has already been deployed and how it’s likely to evolve. Google and Facebook are most directly threatened by Article 21, which affords anyone the right to opt out of personally tailored ads. The next article then confronts machine learning head on, limning a so-called right to explanation: E.U. citizens can contest “legal or similarly significant” decisions made by algorithms and appeal for human intervention. Taken together, Articles 21 and 22 introduce the principle that people are owed agency and understanding when they’re faced by machine-made decisions.
To create a neural net that can reveal its inner workings...researchers...are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks. Others involve psychological insight: One team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision. (The idea is partly inspired by psychological studies of real-life experts like firefighters, who don’t clock in for a shift thinking, These are the 12 rules for fighting fires; when they see a fire before them, they compare it with ones they’ve seen before and act accordingly.) Perhaps the most ambitious of the dozen different projects are those that seek to bolt new explanatory capabilities onto existing deep neural networks. Imagine giving your pet dog the power of speech, so that it might finally explain what’s so interesting about squirrels. Or, as Trevor Darrell, a lead investigator on one of those teams, sums it up, “The solution to explainable A.I. is more A.I.”
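The Rutgers "explain by example" idea is simple to sketch: after a network classifies a query, retrieve the training item whose internal representation lies closest to the query's, and offer it as the rationale. The code below is a bare-bones illustration with made-up features; in a real system, a network's penultimate-layer activations would play the role of the feature vectors.
```python
# Sketch of example-based explanation: a decision is "explained" by the
# most similar training example in the model's feature space. Features
# here are random stand-ins for a network's internal activations.
import numpy as np

def explain_by_example(train_feats, query_feat):
    """Index of the training example nearest to the query in feature space."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(3)
train_feats = rng.normal(size=(1000, 64))      # e.g., penultimate-layer activations
train_labels = rng.integers(0, 10, size=1000)  # hypothetical class labels
query_feat = rng.normal(size=64)

i = explain_by_example(train_feats, query_feat)
print(f"decision resembles training example {i} (label {train_labels[i]})")
```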
... a novel idea for letting an A.I. teach itself how to describe the contents of a picture...create two deep neural networks: one dedicated to image recognition and another to translating languages. ...they lashed these two together and fed them thousands of images that had captions attached to them. As the first network learned to recognize the objects in a picture, the second simply watched what was happening in the first, then learned to associate certain words with the activity it saw. Working together, the two networks could identify the features of each picture, then label them. Soon after, Darrell was presenting some different work to a group of computer scientists when someone in the audience raised a hand, complaining that the techniques he was describing would never be explainable. Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.
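And as a rough sketch of the "lashing two networks together" idea (emphatically not Darrell's actual system), here is one way an image-recognition network and a language network can be coupled so that the second learns to attach words to what the first sees. Dimensions and layer choices are hypothetical; Python with PyTorch:
```python
# Sketch of a two-network captioner: network 1 encodes the image, network 2
# "watches" that encoding and learns to emit words conditioned on it.
import torch
import torch.nn as nn

class CaptionSketch(nn.Module):
    def __init__(self, vocab_size, feat_dim=512, hidden_dim=512):
        super().__init__()
        # Network 1 stand-in: a real system would use a pretrained CNN;
        # here, a linear map over precomputed image features.
        self.encoder = nn.Linear(2048, feat_dim)
        # Network 2: a language model over caption words.
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):
        # Prepend the image representation so every predicted word is
        # conditioned on what the recognition network "saw".
        img = self.encoder(image_feats).unsqueeze(1)  # (B, 1, D)
        words = self.embed(captions)                  # (B, T, D)
        seq = torch.cat([img, words], dim=1)          # (B, T+1, D)
        out, _ = self.lstm(seq)
        return self.to_vocab(out)                     # next-word logits
```
Trained on image-caption pairs, the second network ends up describing the first's behavior in words, which is the spirit of Darrell's "the solution to explainable A.I. is more A.I."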