This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Tuesday, January 09, 2018
Exercise alters the microbiome, enhancing immune function.
Reynolds points to interesting work by Allen et al. showing that exercise by lean, but not obese, subjects enhances microbial production of short-chain fatty acids that suppress tissue irritation and inflammation in the colon as well as the rest of the body. These short-chain fatty acids also boost metabolism and dampen the insulin resistance that is a precursor to diabetes.
Monday, January 08, 2018
Memories are stored by the extracellular matrix surrounding brain cells.
Fascinating work from Thompson et al., who show that degrading the extracellular matrix structure with local injections of the bacterial enzyme chondroitinase ABC into secondary visual cortex area V2L can abolish a remote visual fear memory:
Significance
Perineuronal nets (PNNs), a type of extracellular matrix found only in the central nervous system, wrap tightly around the cell soma and proximal dendrites of a subset of neurons. The PNNs are long-lasting structures that restrict plasticity, making them eligible candidates for memory processing. This work demonstrates that PNNs in the lateral secondary visual cortex (V2L) are essential for the recall of a remote visual fear memory. The results suggest a role of extracellular molecules in storage and retrieval of memories.
Abstract
Throughout life animals learn to recognize cues that signal danger and instantaneously initiate an adequate threat response. Memories of such associations may last a lifetime and far outlast the intracellular molecules currently found to be important for memory processing. The memory engram may be supported by other more stable molecular components, such as the extracellular matrix structure of perineuronal nets (PNNs). Here, we show that recall of remote, but not recent, visual fear memories in rats depends on intact PNNs in the secondary visual cortex (V2L). Supporting our behavioral findings, the increased synchronized theta oscillations between V2L and the basolateral amygdala, a physiological correlate of successful recall, were absent in rats with degraded PNNs in V2L. Together, our findings suggest a role for PNNs in remote memory processing by stabilizing the neural network of the engram.
Friday, January 05, 2018
The narcissism epidemic is dead?
An interesting piece from Wetzel et al., who find no evidence for the narcissism epidemic commonly reported over the past 10-20 years and based on the perception that today’s popular culture encourages individuals to engage in self-inflation:
Figure: Difference between latent means estimated in partial-invariance models as a function of cohort and trait. The means of the 1990s cohort were constrained to 0 for model identification. Mean differences between the 1990s and the 2000s cohorts and between the 1990s and 2010s cohorts can be interpreted as standard deviations. Error bars show ±1 SE of the estimated mean difference.
Are recent cohorts of college students more narcissistic than their predecessors? To address debates about the so-called “narcissism epidemic,” we used data from three cohorts of students (1990s: N = 1,166; 2000s: N = 33,647; 2010s: N = 25,412) to test whether narcissism levels (overall and specific facets) have increased across generations. We also tested whether our measure, the Narcissistic Personality Inventory (NPI), showed measurement equivalence across the three cohorts, a critical analysis that had been overlooked in prior research. We found that several NPI items were not equivalent across cohorts. Models accounting for nonequivalence of these items indicated a small decline in overall narcissism levels from the 1990s to the 2010s (d = −0.27). At the facet level, leadership (d = −0.20), vanity (d = −0.16), and entitlement (d = −0.28) all showed decreases. Our results contradict the claim that recent cohorts of college students are more narcissistic than earlier generations of college students.
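For readers who want the quoted effect sizes made concrete: d is simply a standardized mean difference between cohorts. Here is a minimal Python sketch of that computation on made-up NPI totals (the paper itself estimates d from latent means in partial-invariance models, which this raw-score toy ignores):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical NPI total scores for two cohorts -- invented numbers, not the study's data.
npi_1990s = rng.normal(loc=15.5, scale=6.5, size=1166)
npi_2010s = rng.normal(loc=13.8, scale=6.5, size=25412)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (b.mean() - a.mean()) / np.sqrt(pooled_var)

print(f"d = {cohens_d(npi_1990s, npi_2010s):.2f}")  # negative = a decline since the 1990s
```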
Thursday, January 04, 2018
The temporal organization of perception.
Ronconi et al. do interesting work suggesting that whether we perceive two visual stimuli as separate events or as a single event depends on a precise relationship between specific temporal window durations and specific brain oscillations measured by EEG (alpha oscillations at 8–10 Hz and theta oscillations at 6–7 Hz):
Incoming sensory input is condensed by our perceptual system to optimally represent and store information. In the temporal domain, this process has been described in terms of temporal windows (TWs) of integration/segregation, in which the phase of ongoing neural oscillations determines whether two stimuli are integrated into a single percept or segregated into separate events. However, TWs can vary substantially, raising the question of whether different TWs map onto unique oscillations or, rather, reflect a single, general fluctuation in cortical excitability (e.g., in the alpha band). We used multivariate decoding of electroencephalography (EEG) data to investigate perception of stimuli that either repeated in the same location (two-flash fusion) or moved in space (apparent motion). By manipulating the interstimulus interval (ISI), we created bistable stimuli that caused subjects to perceive either integration (fusion/apparent motion) or segregation (two unrelated flashes). Training a classifier searchlight on the whole channels/frequencies/times space, we found that the perceptual outcome (integration vs. segregation) could be reliably decoded from the phase of prestimulus oscillations in right parieto-occipital channels. The highest decoding accuracy for the two-flash fusion task (ISI = 40 ms) was evident in the phase of alpha oscillations (8–10 Hz), while the highest decoding accuracy for the apparent motion task (ISI = 120 ms) was evident in the phase of theta oscillations (6–7 Hz). These results reveal a precise relationship between specific TW durations and specific oscillations. Such oscillations at different frequencies may provide a hierarchical framework for the temporal organization of perception.
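For the technically inclined, the core analysis idea here, decoding a binary percept from the phase of prestimulus oscillations, can be sketched on synthetic data. Everything below (sampling rate, frequencies, effect size, the single channel) is invented for illustration and stands in for the authors' searchlight over channels, frequencies, and times:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_trials, n_samples = 250, 400, 250  # 1 s of synthetic prestimulus "EEG" per trial

# Each trial: a 9 Hz oscillation with random phase plus noise; the oscillation's
# phase at stimulus onset (last sample) probabilistically sets the percept label.
t = np.arange(n_samples) / fs
phases = rng.uniform(0, 2 * np.pi, n_trials)
eeg = np.sin(2 * np.pi * 9 * t + phases[:, None]) + 0.8 * rng.standard_normal((n_trials, n_samples))
onset_phase = (2 * np.pi * 9 * t[-1] + phases) % (2 * np.pi)
labels = (rng.uniform(size=n_trials) < 0.5 + 0.35 * np.cos(onset_phase)).astype(int)

# Band-pass around alpha (8-10 Hz), then estimate instantaneous phase at onset.
b, a = butter(3, [8 / (fs / 2), 10 / (fs / 2)], btype="band")
est_phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))[:, -1]

# Phase is circular, so feed the classifier its sine and cosine.
X = np.column_stack([np.sin(est_phase), np.cos(est_phase)])
acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
print(f"decoding accuracy from prestimulus alpha phase: {acc:.2f}")  # well above 0.5 chance
```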
Wednesday, January 03, 2018
Facebook admits its sociopathic side. Are social media the drug epidemic of our times?
An article by Farhad Manjoo notes that Facebook itself has pointed to work by Shakya and Christakis showing that overall the use of Facebook is negatively associated with well-being. From Shakya and Christakis:
For example, a 1-standard-deviation increase in “likes clicked” (clicking “like” on someone else's content), “links clicked” (clicking a link to another site or article), or “status updates” (updating one's own Facebook status) was associated with a decrease of 5%–8% of a standard deviation in self-reported mental health.
At the same time, work by Burke and Kraut suggests that the effect of Facebook use on well-being depends on whether communication is passive (clicking "like" or on links) or more meaningful and active (actually engaging in back-and-forth conversations with friends important to the user). The latter use improves people's scores on well-being.
The ongoing debate is a useful one, particularly in light of suggestions that social media are a major drug epidemic of our times, addicting us to “short-term, dopamine-driven feedback loops” that “are destroying how society works.”
Blog Categories:
culture/politics,
happiness,
social cognition,
technology
Tuesday, January 02, 2018
How early stress exposure influences adult decision making.
From Birn et al.:
Individuals who have experienced chronic and high levels of stress during their childhoods are at increased risk for a wide range of behavioral problems, yet the neurobiological mechanisms underlying this association are poorly understood. We measured the life circumstances of a community sample of school-aged children and then followed these children for a decade. Those from the highest and lowest quintiles of childhood stress exposure were invited to return to our laboratory as young adults, at which time we reassessed their life circumstances, acquired fMRI data during a reward-processing task, and tested their judgment and decision making. Individuals who experienced high levels of early life stress showed lower levels of brain activation when processing cues signaling potential loss and increased responsivity when actually experiencing losses. Specifically, those with high childhood stress had reduced activation in the posterior cingulate/precuneus, middle temporal gyrus, and superior occipital cortex during the anticipation of potential rewards; reduced activation in putamen and insula during the anticipation of potential losses; and increased left inferior frontal gyrus activation when experiencing an actual loss. These patterns of brain activity were associated with both laboratory and real-world measures of individuals’ risk taking in adulthood. Importantly, these effects were predicted only by childhood stress exposure and not by current levels of life stress.
Blog Categories:
fear/anxiety/stress,
human development,
self
Monday, January 01, 2018
Pax Technica - the new web of world government
Some clips from an Op-Ed piece by Roger Cohen:
...the puzzle remains. A year of Trump and the world has not veered off a precipice. Is there some 21st century iteration of Adam Smith’s “invisible hand” that explains this equilibrium?
One stab at defining such an invisible force that I find persuasive has been offered by Philip Howard, a professor of Internet Studies at Oxford University. He has coined the term “Pax Technica” to define the vast web of internet-connected devices that, together, create a network of stability.
Just as Smith’s “invisible hand” alluded to the unobservable market forces that lead to equilibrium in a free market, so Howard’s Pax Technica (the title of a book he wrote) evokes the cumulative stabilizing effect of the tens of billions of connected devices forming the Internet of Things (IoT). There is, simply put, too much connection in the world today to allow space for outright destruction, even emanating from Trump.
Implicit in this theory is a radical reordering of the nature of power. Pax Romana, Pax Britannica and Pax Americana depended on the military might of sovereign governments. Pax Technica shifts the source of stability to what Howard calls an “empire of connected things.” National authorities are less influential than supranational connected platforms and the private corporations behind them.
Under Pax Technica, there will be advocates of open systems and closed ones. There will be fierce competitions for influence. There will be nationalist and nativist reactions against the supranational apparent across the world today. There will be a reordering of societies — and possibly their increasing fragmentation — as Facebook traps people in what Chamath Palihapitiya, a former Facebook executive, recently called “short-term, dopamine-driven feedback loops.”
But there will also be the hard-to-measure investment of every owner of those billions of devices in a world of connectedness of which war is the enemy. Trump has come to power at a moment when power is increasingly passing out of the hands of governments. That may be even more reassuring than his incompetence.
Palihapitiya suggested Facebook has forged a world of “no civil discourse, no cooperation, misinformation.” Trump, of course, reflects that. But I think Facebook, on balance, also limits the devastation he can wreak.
Friday, December 29, 2017
When intuition overrides reason.
Gilbert Chin points to work by Walco and Risen showing that a third to a half of us will elect to rely on gut feelings even after having demonstrated an accurate understanding of which choice is more likely to pay off. This suggests that error detection and correction are not necessarily coupled, as assumed in Kahneman's dual-process model, in which System 1's intuitive default decision is subject to System 2's check on its accuracy; people can detect that their intuition is wrong yet fail to correct it. The abstract:
Will people follow their intuition even when they explicitly recognize that it is irrational to do so? Dual-process models of judgment and decision making are often based on the assumption that the correction of errors necessarily follows the detection of errors. But this assumption does not always hold. People can explicitly recognize that their intuitive judgment is wrong but nevertheless maintain it, a phenomenon known as acquiescence. Although anecdotes and experimental studies suggest that acquiescence occurs, the empirical case for acquiescence has not been definitively established. In four studies—using the ratio-bias paradigm, a lottery exchange game, blackjack, and a football coaching decision—we tested acquiescence using recently established criteria. We provide clear empirical support for acquiescence: People can have a faulty intuitive belief about the world (Criterion 1), acknowledge the belief is irrational (Criterion 2), but follow their intuition nonetheless (Criterion 3)—even at a cost.
(Motivated readers can request a PDF of the article with experimental details from me.)
Thursday, December 28, 2017
On gratitude...
I want to pass on this bit from an essay by Philip Garrity in the New York Times philosophy forum "The Stone." On recovering from the vibrancy and trauma of illness, he notes:
I notice myself falling back into that same pattern of trying to harness the vibrancy of illness...I am learning, however slowly, that maintaining that level of mental stamina, that fever pitch of experience, is less a recipe for enlightenment, and more for exhaustion.
The existentialist philosopher Jean-Paul Sartre describes our experience as a perpetual transitioning between unreflective consciousness, “living-in-the-world,” and reflective consciousness, “thinking-about-the-world.” Gratitude seems to necessitate an act of reflection on experience, which, in turn, requires a certain abstraction away from that direct experience. Paradoxically, our capacity for gratitude is simultaneously enhanced and frustrated as we strive to attain it.
Perhaps, then, there is an important difference between reflecting on wellness and experiencing wellness. My habitual understanding of gratitude had me forcefully lodging myself into the realm of reflective consciousness, pulling me away from living-in-the-world. I was constantly making an inventory of my wellness, too busy counting the coins to ever spend them.
Gratitude, in the experiential sense, requires that we wade back into the current of unreflective consciousness, which, to the egocentric mind, can easily feel like an annihilation of consciousness altogether. Yet, Sartre says that action that is unreflective isn’t necessarily unconscious. There is something Zen about this, the actor disappearing into the action. It is the way of the artist in the act of creative expression, the musician in the flow of performance. But, to most of us, it is a loss of self — and the sense of competency that comes with it.
If there is any sage in me, he says I must accept the vulnerability of letting the pain fade, of allowing the wounds to heal. Even in the wake of grave illness — or, more unsettlingly, in anticipation of it — we must risk falling back asleep into wellness.
Blog Categories:
meditation,
mindfulness,
self,
self help
Wednesday, December 27, 2017
Mind the Hype - Mindfulness and Meditation
Smith et al. point to and summarize an article by Van Dam et al. I pass on the Van Dam et al. abstract:
During the past two decades, mindfulness meditation has gone from being a fringe topic of scientific investigation to being an occasional replacement for psychotherapy, tool of corporate well-being, widely implemented educational practice, and “key to building more resilient soldiers.” Yet the mindfulness movement and empirical evidence supporting it have not gone without criticism. Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed. Addressing such concerns, the present article discusses the difficulties of defining mindfulness, delineates the proper scope of research into mindfulness practices, and explicates crucial methodological issues for interpreting results from investigations of mindfulness. For doing so, the authors draw on their diverse areas of expertise to review the present state of mindfulness research, comprehensively summarizing what we do and do not know, while providing a prescriptive agenda for contemplative science, with a particular focus on assessment, mindfulness training, possible adverse effects, and intersection with brain imaging. Our goals are to inform interested scientists, the news media, and the public, to minimize harm, curb poor research practices, and staunch the flow of misinformation about the benefits, costs, and future prospects of mindfulness meditation.
I also pass on Smith et al.'s list of points that seem fairly settled (they provide supporting references):
-Meditation almost certainly does sharpen your attention.
-Long-term, consistent meditation does seem to increase resiliency to stress.
-Meditation does appear to increase compassion. It also makes our compassion more effective.
-Meditation does seem to improve mental health—but it’s not necessarily more effective than other steps you can take.
-Mindfulness could have a positive impact on your relationships.
-Mindfulness seems to reduce many kinds of bias.
-Meditation does have an impact on physical health—but it’s modest.
-Meditation might not be good for everyone all the time.
-What kind of meditation is right for you? That depends.
-How much meditation is enough? That also depends.
Tuesday, December 26, 2017
The 11 separate nations of the United States
I just became aware, through an article by Matthew Speiser in The Independent, of the interesting work of Colin Woodard suggesting that 11 distinct cultures have historically divided the US. Speiser gives capsule descriptions of the nations, named Yankeedom, New Netherland, The Midlands, Tidewater, Greater Appalachia, Deep South, El Norte, The Left Coast, The Far West, New France, and First Nation. They are illustrated by the following graphic from his article:
Monday, December 25, 2017
Autopilots and metastates of our brains.
I pass on summaries from two recent contributions to understanding automatic information processing in our brains. First, from Vatansever et al., work showing a role for the default mode network that has been the subject of many MindBlog posts:
Concurrent with mental processes that require rigorous computation and control, a series of automated decisions and actions govern our daily lives, providing efficient and adaptive responses to environmental demands. Using a cognitive flexibility task, we show that a set of brain regions collectively known as the default mode network plays a crucial role in such “autopilot” behavior, i.e., when rapidly selecting appropriate responses under predictable behavioral contexts. While applying learned rules, the default mode network shows both greater activity and connectivity. Furthermore, functional interactions between this network and hippocampal and parahippocampal areas as well as primary visual cortex correlate with the speed of accurate responses. These findings indicate a memory-based “autopilot role” for the default mode network, which may have important implications for our current understanding of healthy and adaptive brain processing.
Also, Vidaurre et al. describe two distinct networks, or metastates, between which the brain cycles:
We address the important question of the temporal organization of large-scale brain networks, finding that the spontaneous transitions between networks of interacting brain areas are predictable. More specifically, the network activity is highly organized into a hierarchy of two distinct metastates, such that transitions are more probable within, than between, metastates. One of these metastates represents higher order cognition, and the other represents the sensorimotor systems. Furthermore, the time spent in each metastate is subject-specific, is heritable, and relates to behavior. Although evidence of non–random-state transitions has been found at the microscale, this finding at the whole-brain level, together with its relation to behavior, has wide implications regarding the cognitive role of large-scale resting-state networks.
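The claim that "transitions are more probable within, than between, metastates" is easy to picture with a toy Markov chain. This is my own illustration, not the paper's model or data; all the numbers are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy example: 6 brain states, states 0-2 form a "cognitive" metastate and
# states 3-5 a "sensorimotor" one. Within-metastate transitions are favored.
n_states, p_within = 6, 0.9
metastate = np.array([0, 0, 0, 1, 1, 1])
T = np.zeros((n_states, n_states))
for i in range(n_states):
    same = metastate == metastate[i]
    T[i, same] = p_within / same.sum()       # spread 0.9 over same-metastate states
    T[i, ~same] = (1 - p_within) / (~same).sum()

# Simulate a state sequence and count transitions that cross metastates.
seq = [0]
for _ in range(10_000):
    seq.append(rng.choice(n_states, p=T[seq[-1]]))
seq = np.array(seq)
switches = metastate[seq[1:]] != metastate[seq[:-1]]
print(f"fraction of transitions crossing metastates: {switches.mean():.2f}")  # ~0.10
```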
Friday, December 22, 2017
Detailed demographics from Google Street Views.
Interesting... neighborhood-level estimates of the racial, economic and political characteristics of 200 U.S. cities using Google Street View images of people's cars.
...From Gebru et al.:
The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time.
From the summary by Ingraham:
...The 22 million vehicles in the Google Street View database comprise roughly 8 percent of all vehicles in the United States...the researchers first paired the Zip code-level vehicle data with numbers on race, income and education from the U.S. Census Bureau's American Community Survey. They did this for a random 15 percent of the Zip codes in their data set to create a “training set.” They then created another algorithm to go through the training set to see how vehicle characteristics correlated with neighborhood characteristics: What kinds of vehicles are disproportionately likely to appear in white neighborhoods, or black ones? Low-income vs. high-income? Highly-educated areas vs. less-educated ones?
You can do similar exercises for other demographic characteristics, like educational attainment. People with graduate degrees were more likely to drive Audi hatchbacks with high city MPG. Those with less than a high school education, on the other hand, were more likely to drive cars made by U.S. manufacturers in the 1990s.
“We found a strong correlation between our results and ACS [American Community Survey] values for every demographic statistic we examined,” the researchers wrote. They plotted the algorithm's demographic estimates against the actual numbers from the ACS and measured their correlation coefficient: a number from zero (no correlation) to 1 (perfect correlation) that measures how accurately one set of numbers can predict the variation in a separate set of numbers.
At the city level, the algorithm did a particularly good job of predicting the percent of Asians (correlation coefficient of 0.87), blacks (0.82) and whites (0.77). It also predicted median household income (0.82) quite well. On measures of educational attainment, the correlation coefficients ran from about 0.54 to 0.70 — again, not perfect, but fairly impressive accuracy considering the predictions derived solely from auto information and nothing else.
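Those headline numbers are Pearson correlation coefficients between the model's estimates and the ACS ground truth. A minimal sketch of that validation step, on fabricated stand-in numbers rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Stand-in data: per-city ACS values and noisy "predictions" from car features.
n_cities = 200
acs_pct_asian = rng.uniform(0, 30, n_cities)
predicted = acs_pct_asian + rng.normal(0, 5, n_cities)  # imperfect model output

r, p = pearsonr(predicted, acs_pct_asian)
print(f"correlation coefficient: r = {r:.2f} (p = {p:.1e})")
# Gebru et al. report r ~ 0.87 for percent Asian at the city level.
```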
Thursday, December 21, 2017
Some morning Rachmaninoff - Fantasy Piece in E, Op. 3, No. 3
Here is a Rachmaninoff Fantasy Piece in E, Op. 3, No. 3, which I recorded last week, continuing to experiment with using my new iPhone X with a USB Zoom iQ6 condenser microphone in the Lightning port for making video recordings that can be edited and then sent directly to YouTube.
Wednesday, December 20, 2017
Wealth inequality as a law of nature.
Here is the abstract from Scheffer et al., a bit of work that casts an interesting light on the current Republican tax legislation, which significantly accelerates the unequal distribution of wealth in this country, as described nicely by David Leonhardt:
Significance
Inequality is one of the main drivers of social tension. We show striking similarities between patterns of inequality between species abundances in nature and wealth in society. We demonstrate that in the absence of equalizing forces, such large inequality will arise from chance alone. While natural enemies have an equalizing effect in nature, inequality in societies can be suppressed by wealth-equalizing institutions. However, over the past millennium, such institutions have been weakened during periods of societal upscaling. Our analysis suggests that due to the very same mathematical principle that rules natural communities (indeed, a “law of nature”), extreme wealth inequality is inevitable in a globalizing world unless effective wealth-equalizing institutions are installed on a global scale.
Abstract
Most societies are economically dominated by a small elite, and similarly, natural communities are typically dominated by a small fraction of the species. Here we reveal a strong similarity between patterns of inequality in nature and society, hinting at fundamental unifying mechanisms. We show that chance alone will drive 1% or less of the community to dominate 50% of all resources in situations where gains and losses are multiplicative, as in returns on assets or growth rates of populations. Key mechanisms that counteract such hyperdominance include natural enemies in nature and wealth-equalizing institutions in society. However, historical research of European developments over the past millennium suggests that such institutions become ineffective in times of societal upscaling. A corollary is that in a globalizing world, wealth will inevitably be appropriated by a very small fraction of the population unless effective wealth-equalizing institutions emerge at the global level.
Figure - Inequality in society (Left) and nature (Right). The Upper panels illustrate the similarity between the wealth distribution of the world’s 1,800 billionaires (A) (8) and the abundance distribution among the most common trees in the Amazon forest (B) (3). The Lower panels illustrate inequality in nature and society more systematically, comparing the Gini index of wealth in countries (C) and the Gini index of abundance in a large set of natural communities (D). (The Gini index is an indicator of inequality that ranges from 0 for entirely equal distributions to 1 for the most unequal situation. It is a more integrative indicator of inequality than the fraction that represents 50%, but the two are closely related in practice. Surprisingly, Gini indices for our natural communities are quite similar to the Gini indices for wealth distributions of 181 countries.)
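The paper's central mechanism, that multiplicative gains and losses alone concentrate resources, is easy to reproduce in a toy simulation. This is my own sketch with arbitrary parameters, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Start everyone equal; apply random multiplicative returns each "year".
n_agents, n_years = 10_000, 200
wealth = np.ones(n_agents)
for _ in range(n_years):
    wealth *= rng.lognormal(mean=0.0, sigma=0.1, size=n_agents)

def gini(x):
    """Gini index: 0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

top1_share = np.sort(wealth)[-n_agents // 100:].sum() / wealth.sum()
print(f"Gini index: {gini(wealth):.2f}, share held by top 1%: {top1_share:.0%}")
```

Despite identical starting wealth and identically distributed luck, the simulated Gini index climbs well above 0.5 and the top 1% end up with a large share, which is the "chance alone" result the Significance statement describes.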
Tuesday, December 19, 2017
Skill networks and human capital.
Anderson does an interesting analysis showing that workers who can combine different skills synergistically earn more than other skilled workers. I pass on both the Abstract and the Significance statements:
Significance
The relationship between worker human capital and wages is a question of considerable economic interest. Skills are usually characterized using a one-dimensional measure, such as years of training. However, in knowledge-based production, the interaction between a worker’s skills is also important. Here, we propose a network-based method for characterizing worker skill sets. We construct a human capital network, wherein nodes are skills and two skills are connected if a worker has both or both are required for the same job. We then illustrate the method by analyzing an online freelance labor market, showing that workers with diverse skills earn higher wages and that those who use their diverse skills in combination earn the highest wages of all.
Abstract
We propose a network-based method for measuring worker skills. We illustrate the method using data from an online freelance website. Using the tools of network analysis, we divide skills into endogenous categories based on their relationship with other skills in the market. Workers who specialize in these different areas earn dramatically different wages. We then show that, in this market, network-based measures of human capital provide additional insight into wages beyond traditional measures. In particular, we show that workers with diverse skills earn higher wages than those with more specialized skills. Moreover, we can distinguish between two different types of workers benefiting from skill diversity: jacks-of-all-trades, whose skills can be applied independently on a wide range of jobs, and synergistic workers, whose skills are useful in combination and fill a hole in the labor market. On average, workers whose skills are synergistic earn more than jacks-of-all-trades.
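The construction described above, a graph whose nodes are skills with an edge whenever two skills co-occur in a worker's profile, is straightforward to build. A hypothetical sketch using networkx, with made-up skill lists standing in for the freelance-market data:

```python
import itertools
import networkx as nx
from networkx.algorithms import community

# Made-up worker skill profiles (the paper uses online freelance listings).
workers = [
    {"python", "statistics", "machine learning"},
    {"graphic design", "illustration"},
    {"python", "web scraping", "statistics"},
    {"copywriting", "seo", "web scraping"},
]

# Human capital network: skills are nodes; co-occurrence in a profile is an edge.
G = nx.Graph()
for skills in workers:
    G.add_nodes_from(skills)
    for a, b in itertools.combinations(sorted(skills), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Community detection yields endogenous skill categories; a worker whose
# skills span communities is "diverse" in the paper's sense.
for i, c in enumerate(community.greedy_modularity_communities(G)):
    print(f"category {i}: {sorted(c)}")
```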
Monday, December 18, 2017
Positive stimuli blur time.
From Roberts et al.:
Anecdotal reports that time “flies by” or “slows down” during emotional events are supported by evidence that the motivational relevance of stimuli influences subsequent duration judgments. Yet it is unknown whether the subjective quality of events as they unfold is altered by motivational relevance. In a novel paradigm, we measured the subjective experience of moment-to-moment visual perception. Participants judged the temporal smoothness of high-approach positive images (desserts), negative images (e.g., of bodily mutilation), and neutral images (commonplace scenes) as they faded to black. Results revealed approach-motivated blurring, such that positive stimuli were judged as smoother and negative stimuli as choppier relative to neutral stimuli. Participants’ ratings of approach motivation predicted perceived fade smoothness after we controlled for low-level stimulus features. Electrophysiological data indicated that approach-motivated blurring modulated relatively rapid perceptual activation. These results indicate that stimulus value influences subjective temporal perceptual acuity; approach-motivating stimuli elicit perception of a “blurred” frame rate characteristic of speeded motion.
Friday, December 15, 2017
Teaching A.I. to explain itself
An awkward feature of the artificial intelligence, or machine learning, algorithms that teach themselves to translate languages, analyze X-ray images and mortgage loans, judge the probability of behaviors from faces, and so on, is that we are unable to discern exactly what they are doing as they perform these functions. How can we trust these machines unless they can explain themselves? This issue is the subject of an interesting piece by Cliff Kuang. A few clips from the article:
Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand.
A decade in the making, the European Union’s General Data Protection Regulation finally goes into effect in May 2018. It’s a sprawling, many-tentacled piece of legislation whose opening lines declare that the protection of personal data is a universal human right. Among its hundreds of provisions, two seem aimed squarely at where machine learning has already been deployed and how it’s likely to evolve. Google and Facebook are most directly threatened by Article 21, which affords anyone the right to opt out of personally tailored ads. The next article then confronts machine learning head on, limning a so-called right to explanation: E.U. citizens can contest “legal or similarly significant” decisions made by algorithms and appeal for human intervention. Taken together, Articles 21 and 22 introduce the principle that people are owed agency and understanding when they’re faced by machine-made decisions.
To create a neural net that can reveal its inner workings...researchers...are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks. Others involve psychological insight: One team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision. (The idea is partly inspired by psychological studies of real-life experts like firefighters, who don’t clock in for a shift thinking, These are the 12 rules for fighting fires; when they see a fire before them, they compare it with ones they’ve seen before and act accordingly.) Perhaps the most ambitious of the dozen different projects are those that seek to bolt new explanatory capabilities onto existing deep neural networks. Imagine giving your pet dog the power of speech, so that it might finally explain what’s so interesting about squirrels. Or, as Trevor Darrell, a lead investigator on one of those teams, sums it up, “The solution to explainable A.I. is more A.I.”
... a novel idea for letting an A.I. teach itself how to describe the contents of a picture...create two deep neural networks: one dedicated to image recognition and another to translating languages. ...they lashed these two together and fed them thousands of images that had captions attached to them. As the first network learned to recognize the objects in a picture, the second simply watched what was happening in the first, then learned to associate certain words with the activity it saw. Working together, the two networks could identify the features of each picture, then label them. Soon after, Darrell was presenting some different work to a group of computer scientists when someone in the audience raised a hand, complaining that the techniques he was describing would never be explainable. Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.
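Darrell's "lash two networks together" recipe is essentially neural image captioning: a vision network produces features, and a language network learns to emit words conditioned on them. A skeletal, untrained PyTorch sketch of that wiring (all dimensions and the vocabulary size are invented for illustration):

```python
import torch
import torch.nn as nn

class CaptioningNet(nn.Module):
    """Image-recognition network feeding a language network, per the article."""
    def __init__(self, vocab_size=1000, feat_dim=256):
        super().__init__()
        # Network 1: a tiny convolutional image encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Network 2: an LSTM that "watches" the encoder and emits words.
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.to_vocab = nn.Linear(feat_dim, vocab_size)

    def forward(self, images, caption_tokens):
        feats = self.encoder(images)           # (batch, feat_dim)
        words = self.embed(caption_tokens)     # (batch, T, feat_dim)
        # Condition the language model on the image by prepending its features.
        seq = torch.cat([feats.unsqueeze(1), words], dim=1)
        out, _ = self.lstm(seq)
        return self.to_vocab(out)              # next-word logits

model = CaptioningNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 6, 1000])
```

Trained on image-caption pairs, the second network learns to associate words with the first network's activity, which is the coupling the article describes.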
Thursday, December 14, 2017
Debussy La plus que lente - a first musical offering from Austin, Texas.
This is a personal post, a musical offering of the sort I have done on MindBlog in previous years. The Steinway B that I have used since 2002 recently moved with me from Fort Lauderdale, Florida to Austin, Texas, not to the family house I moved back into, but to the larger living room of my son's home, which can accommodate the kind of musical socials I have given for many years. Techie MindBlog readers might be interested in my discovery that the video camera on my iPhone X is better than the Canon video camera I had been using, and that a small USB Zoom iQ6 condenser microphone attached to its Lightning connector gives audio quality comparable to the much larger C1 Studio condenser microphone, whose output had to be tediously synchronized with video from the Canon camera stripped of its inferior audio sound track.
Wednesday, December 13, 2017
Digital mass persuasion via psychological targeting.
Sigh...we're heading full-tilt towards a plutocracy which will manipulate the masses via technologies of the sort described by Matz et al.:
Significance
Building on recent advancements in the assessment of psychological traits from digital footprints, this paper demonstrates the effectiveness of psychological mass persuasion—that is, the adaptation of persuasive appeals to the psychological characteristics of large groups of individuals with the goal of influencing their behavior. On the one hand, this form of psychological mass persuasion could be used to help people make better decisions and lead healthier and happier lives. On the other hand, it could be used to covertly exploit weaknesses in their character and persuade them to take action against their own best interest, highlighting the potential need for policy interventions.
Abstract
People are exposed to persuasive communication across many different contexts: Governments, companies, and political parties use persuasive appeals to encourage people to eat healthier, purchase a particular product, or vote for a specific candidate. Laboratory studies show that such persuasive appeals are more effective in influencing behavior when they are tailored to individuals’ unique psychological characteristics. However, the investigation of large-scale psychological persuasion in the real world has been hindered by the questionnaire-based nature of psychological assessment. Recent research, however, shows that people’s psychological characteristics can be accurately predicted from their digital footprints, such as their Facebook Likes or Tweets. Capitalizing on this form of psychological assessment from digital footprints, we test the effects of psychological persuasion on people’s actual behavior in an ecologically valid setting. In three field experiments that reached over 3.5 million individuals with psychologically tailored advertising, we find that matching the content of persuasive appeals to individuals’ psychological characteristics significantly altered their behavior as measured by clicks and purchases. Persuasive appeals that were matched to people’s extraversion or openness-to-experience level resulted in up to 40% more clicks and up to 50% more purchases than their mismatching or unpersonalized counterparts. Our findings suggest that the application of psychological targeting makes it possible to influence the behavior of large groups of people by tailoring persuasive appeals to the psychological needs of the target audiences. We discuss both the potential benefits of this method for helping individuals make better decisions and the potential pitfalls related to manipulation and privacy.
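At bottom, the reported effect is a comparison of conversion rates between trait-matched and mismatched ad groups. A sketch of that comparison on fabricated counts (not the study's data):

```python
from scipy.stats import chi2_contingency

# Fabricated illustration: clicks vs. impressions for trait-matched and
# mismatched ads (Matz et al. report up to ~40% more clicks when matched).
matched = {"clicks": 1400, "impressions": 100_000}
mismatched = {"clicks": 1000, "impressions": 100_000}

table = [
    [matched["clicks"], matched["impressions"] - matched["clicks"]],
    [mismatched["clicks"], mismatched["impressions"] - mismatched["clicks"]],
]
chi2, p, _, _ = chi2_contingency(table)

ctr_m = matched["clicks"] / matched["impressions"]
ctr_mm = mismatched["clicks"] / mismatched["impressions"]
print(f"CTR matched {ctr_m:.2%} vs mismatched {ctr_mm:.2%} "
      f"(+{ctr_m / ctr_mm - 1:.0%}), chi2 p = {p:.1e}")
```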
Blog Categories:
culture/politics,
emotions,
futures,
technology