Friday, February 26, 2016

Winning a competition predicts dishonest behavior.

Interesting observations from Schurr and Ritov:

Significance
Competition is prevalent. People often resort to unethical means to win (e.g., the recent Volkswagen scandal). Not surprisingly, competition is central to the study of economics, psychology, sociology, political science, and more. Although we know much about contestants’ behavior before and during competitions, we know little about contestants’ behavior after the competition has ended. Connecting postcompetition behaviors with preceding competition experience, we find that after a competition is over winners behave more dishonestly than losers in an unrelated subsequent task. Furthermore, the subsequent unethical behavior effect seems to depend on winning, rather than on mere success. Providing insight into the issue is important in gaining understanding of how unethical behavior may cascade from exposure to competitive settings.
Abstract
Winning a competition engenders subsequent unrelated unethical behavior. Five studies reveal that after a competition has taken place winners behave more dishonestly than competition losers. Studies 1 and 2 demonstrate that winning a competition increases the likelihood of winners to steal money from their counterparts in a subsequent unrelated task. Studies 3a and 3b demonstrate that the effect holds only when winning means performing better than others (i.e., determined in reference to others) but not when success is determined by chance or in reference to a personal goal. Finally, study 4 demonstrates that a possible mechanism underlying the effect is an enhanced sense of entitlement among competition winners.

Thursday, February 25, 2016

Abnormal cortical folding correlates with trait anxiety.

From Miskovich et al. at the Univ. of Wisconsin, Milwaukee:
Dispositional anxiety is a stable personality trait that is a key risk factor for internalizing disorders, and understanding the neural correlates of trait anxiety may help us better understand the development of these disorders. Abnormal cortical folding is thought to reflect differences in cortical connectivity occurring during brain development. Therefore, assessing gyrification may advance understanding of cortical development and organization associated with trait anxiety. Previous literature has revealed structural abnormalities in trait anxiety and related disorders, but no study to our knowledge has examined gyrification in trait anxiety. We utilized a relatively novel measure, the local gyrification index (LGI), to explore differences in gyrification as a function of trait anxiety. We obtained structural MRI scans using a 3T magnetic resonance scanner on 113 young adults. Results indicated a negative correlation between trait anxiety and LGI in the left superior parietal cortex, specifically the precuneus, reflecting less cortical complexity among those high on trait anxiety. Our findings suggest that aberrations in cortical gyrification in a key region of the default mode network is a correlate of trait anxiety and may reflect disrupted local parietal connectivity.
Inflated and pial surface maps of the left hemisphere demonstrating decreased gyrification in the precuneus as a function of trait anxiety. There was no relationship between anxiety and gyrification in the right hemisphere. Images on the left depict the medial view of the left hemisphere. Images on the right are a view from the top of the right hemisphere and are tilted 30 degrees to provide a better angle for viewing the cluster extent.

Wednesday, February 24, 2016

Moralistic gods enhance sociality.

Purzycki et al. combine laboratory experiments, cross-cultural fieldwork, and analysis of the historical record to propose that belief in judgmental deities was key to the cooperation needed to build and sustain large, complex societies.
Since the origins of agriculture, the scale of human cooperation and societal complexity has dramatically expanded. This fact challenges standard evolutionary explanations of prosociality because well-studied mechanisms of cooperation based on genetic relatedness, reciprocity and partner choice falter as people increasingly engage in fleeting transactions with genetically unrelated strangers in large anonymous groups. To explain this rapid expansion of prosociality, researchers have proposed several mechanisms. Here we focus on one key hypothesis: cognitive representations of gods as increasingly knowledgeable and punitive, and who sanction violators of interpersonal social norms, foster and sustain the expansion of cooperation, trust and fairness towards co-religionist strangers. We tested this hypothesis using extensive ethnographic interviews and two behavioural games designed to measure impartial rule-following among people (n = 591, observations = 35,400) from eight diverse communities from around the world: (1) inland Tanna, Vanuatu; (2) coastal Tanna, Vanuatu; (3) Yasawa, Fiji; (4) Lovu, Fiji; (5) Pesqueiro, Brazil; (6) Pointe aux Piments, Mauritius; (7) the Tyva Republic (Siberia), Russia; and (8) Hadzaland, Tanzania. Participants reported adherence to a wide array of world religious traditions including Christianity, Hinduism and Buddhism, as well as notably diverse local traditions, including animism and ancestor worship. Holding a range of relevant variables constant, the higher participants rated their moralistic gods as punitive and knowledgeable about human thoughts and actions, the more coins they allocated to geographically distant co-religionist strangers relative to both themselves and local co-religionists. Our results support the hypothesis that beliefs in moralistic, punitive and knowing gods increase impartial behaviour towards distant co-religionists, and therefore can contribute to the expansion of prosociality.

Tuesday, February 23, 2016

Creative cognition and brain network dynamics.

Beaty et al. review recent neuroimaging studies whose converging trends suggest that creative cognition involves increased cooperation between the brain's default and executive control networks. (Motivated readers can obtain the article from me.)
Several recent neuroimaging studies have found that creative cognition involves increased cooperation of the default and executive control networks, brain systems linked to self-generated thought and cognitive control.
Default–control network interactions occur during cognitive tasks that involve the generation and evaluation of creative ideas. This pattern of brain network connectivity has been reported across domain-general creative problem solving (e.g., divergent thinking) and domain-specific artistic performance (e.g., poetry composition, musical improvisation, and visual art production).
Default network activity during creative cognition appears to reflect the spontaneous generation of candidate ideas, or potentially useful information derived from long-term memory.
The control network may couple with the default network during idea generation or evaluation to constrain cognition to meet specific task goals.


Dorsolateral Prefrontal Cortex Connectivity During Musical Improvisation. The right dorsolateral prefrontal cortex (DLPFC; green) shows differential connectivity as a function of task goals during musical improvisation in professional pianists. (A) Functional connectivity associated with the goal of using specific sets of piano keys; brain maps show increased coupling between the right DLPFC and motor regions (yellow, e.g., dorsal pre-motor area and the pre-supplementary motor area). (B) Functional connectivity associated with the goal of expressing specific emotions; brain maps show increased coupling between the right DLPFC and default network regions [blue, e.g., medial prefrontal cortex (MPFC), posterior cingulate cortex (PCC), and bilateral inferior parietal lobule (IPL)].

Monday, February 22, 2016

A mindfulness meditation intervention enhances connectivity of brain executive and default modes and lowers inflammation markers.

Creswell et al. recruited 35 stressed adult job-seekers; half participated in an intensive three-day mindfulness meditation retreat, while the other half completed a three-day relaxation retreat program without the mindfulness component. Brain scans and blood samples were obtained before and four months after the program. Mindfulness meditation was associated with reduced blood levels of interleukin-6, a marker of stress and inflammation, and with increased functional connectivity between the participants’ resting default mode network and areas of the dorsolateral prefrontal cortex important for attention and executive control. Neither change was seen in participants who received only the relaxation training. The suggestion is that the brain changes cause the decrease in inflammatory markers. Here are some clips of context from their introduction:
Mindfulness meditation training programs, which train receptive attention and awareness to one’s present moment experience, have been shown to improve a broad range of stress-related psychiatric and physical health outcomes in initial randomized controlled trials...recent well-controlled studies indicate that mindfulness meditation training may reduce markers of inflammation (C Reactive Protein, Interleukin-6 (IL-6), neurogenic inflammation) in stressed individuals.
One possibility is that mindfulness meditation training alters resting state functional connectivity (rsFC) of brain networks implicated in mind wandering (the Default Mode Network, DMN) and executive control (the Executive Control Network, EC), which in turn improves emotion regulation, stress resilience, and stress-related health outcomes in at-risk patient populations.... a cross-sectional study (N=25) showed that advanced mindfulness meditation practitioners had increased functional connectivity of a key hub in the default mode network (DMN) (i.e., posterior cingulate cortex) with regions considered to be important in top down executive control (EC) (dorsolateral prefrontal cortex, dorsal ACC), both at rest and during a guided mindfulness meditation practice. This coupling of one’s DMN at rest with regions of the EC network may be important for emotion regulation and stress resilience effects, as greater activation and functional connectivity of EC regions, such as the dlPFC, is associated with reduced pain, negative affect, and stress.
Here we provide the first experimental test of whether an intensive 3-day mindfulness meditation training intervention (relative to a relaxation training intervention) alters DMN connectivity and circulating IL-6 in a high stress unemployed job-seeking community sample. IL-6 is an established clinical health biomarker that is elevated in high stress populations and is associated with elevated cardiovascular disease and mortality risk... unemployment is a well-known chronic stressor that can foster a loss of control, helplessness, and financial setbacks, and unemployment is associated with elevated inflammation. Building on initial cross-sectional evidence (17), we hypothesized that mindfulness meditation training would increase rsFC between the DMN and regions implicated in attention and executive control (dlPFC and dACC). Moreover, we tested whether mindfulness meditation training (relative to relaxation training) decreased circulating IL-6 at 4-month follow up, and whether pre-post intervention increases in DMN-dlPFC rsFC mediated IL-6 improvements at 4-month follow-up.
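For readers who want to see the logic of that last hypothesis concretely, here is a minimal sketch of a bootstrapped mediation test of the kind implied (group assignment → connectivity change → IL-6). The variable names, the simple regression-based approach, and the simulated numbers are my own illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 35                                     # sample size mentioned in the post
group = rng.integers(0, 2, n).astype(float)        # 1 = mindfulness, 0 = relaxation (simulated)
dmn_dlpfc = 0.5 * group + rng.normal(0, 1, n)      # simulated change in DMN-dlPFC connectivity
il6 = -0.4 * dmn_dlpfc + rng.normal(0, 1, n)       # simulated IL-6 at follow-up

def indirect_effect(x, m, y):
    """a*b: (effect of x on m) times (effect of m on y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Resample participants with replacement to get a confidence interval on a*b.
boot = np.array([indirect_effect(group[i], dmn_dlpfc[i], il6[i])
                 for i in (rng.integers(0, n, n) for _ in range(5000))])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(group, dmn_dlpfc, il6):.3f}, "
      f"95% bootstrap CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```

A confidence interval excluding zero is the usual criterion for concluding that the connectivity change carries part of the group effect on IL-6.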

Friday, February 19, 2016

Forming Beliefs: Why Valence Matters.

Sharot and Garrett offer a review article that puts in perspective their work, mentioned in a previous MindBlog post, on how we see the world through rose-colored glasses. (Motivated readers can obtain a copy from me.)
One of the most salient attributes of information is valence: whether a piece of news is good or bad. Contrary to classic learning theories, which implicitly assume beliefs are adjusted similarly regardless of valence, we review evidence suggesting that different rules and mechanisms underlie learning from desirable and undesirable information. For self-relevant beliefs this asymmetry generates a positive bias, with significant implications for individuals and society. We discuss the boundaries of this asymmetry, characterize the neural system supporting it, and describe how changes in this circuit are related to individual differences in behavior.

Thursday, February 18, 2016

A neural crossroads of psychiatric illnesses as a target for therapeutic brain stimulation

I want to pass on this open-access article by Downar et al., which contains some useful graphics illustrating their point about a central role for the anterior cingulate cortex and anterior insula in most psychiatric disorders. Here is their abstract:
Recent meta-analyses of structural and functional neuroimaging studies are converging on a collective core of brain regions affected across most psychiatric disorders, centered on the dorsal anterior cingulate cortex (dACC) and anterior insula. These nodes correspond well to an anterior cingulo-insular (aCIN) or ‘salience’ network, and stand at a crossroads within the functional architecture of the brain, acting as a switch to deploy other major functional networks according to motivational demands and environmental constraints. Therefore, disruption of these ‘linchpin’ areas may be disproportionately disabling, even when other networks remain intact. These regions may represent promising targets for a new generation of anatomically directed brain stimulation treatments. Here, we review the potential of the psychiatric core areas as targets for therapeutic brain stimulation in psychiatric disease.

Wednesday, February 17, 2016

Finally...a brain area specialized for music has been found.

Norman-Haignere, Kanwisher, and McDermott have devised a new method for computationally dissecting functional magnetic resonance imaging scans of the brain. It reveals an area within the major crevice, or sulcus, of the auditory cortex in the temporal lobe, just above the ears, that responds to music of any kind (drumming, whistling, pop songs, rap, anything melodic or rhythmic) independently of general properties of sound such as pitch, spectrotemporal modulation, and frequency. They also found an area selective for speech that is not explainable by standard acoustic features.

It is possible that music sensitivity is more ancient and fundamental to the human brain than speech perception, with speech having evolved from music.
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles (“components”) whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
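To make the "voxel decomposition" idea more concrete, here is a toy sketch of the general structure of the problem: factor a sounds-by-voxels response matrix into a small number of component response profiles and voxel weight maps. Note that Norman-Haignere et al. use their own decomposition algorithm, not the off-the-shelf non-negative matrix factorization used below, and the data here are random placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_components = 165, 5000, 6      # 165 natural sounds, 6 components
responses = np.abs(rng.standard_normal((n_sounds, n_voxels)))   # placeholder fMRI responses

model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
component_profiles = model.fit_transform(responses)  # (sounds x components) response profiles
voxel_weights = model.components_                    # (components x voxels) anatomical weight maps

# Each voxel's response is modeled as a weighted sum of the component profiles;
# a music- or speech-selective component would respond strongly to those sounds only.
reconstruction = component_profiles @ voxel_weights
print("reconstruction error:", round(float(np.linalg.norm(responses - reconstruction)), 2))
```

The point of the decomposition is that selectivity (e.g., for music) can be strong at the component level even when it is diluted in any single voxel's raw response.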

Tuesday, February 16, 2016

A 50-year misunderstanding of how we decide to initiate action - our intuition is valid

A commonly accepted assumption (one that underlies all of the essays at dericbownds.net and many MindBlog posts) is that a line of experiments starting in the early 1980s with Benjamin Libet’s work has demonstrated that our brains initiate an action 300 msec or more before our conscious ‘urge’ to move. It is as if we are ‘late’ to our own actions, because separate pathways generate the actual movement and our delayed ‘intention’ to move. This counterintuitive paradox has generated vigorous controversy for many years, along with debate over its implications for ‘free will’.

Schurger et al. point out that the ‘readiness potential’ (RP, a slow build-up of scalp electrical potential preceding the onset of subjectively spontaneous voluntary movements) discovered 50 years ago was presumed to reflect the consequence of a decision process in the brain (‘the electro-physiological sign of planning, preparation, and initiation of volitional acts’). A new generation of experiments now suggests that brain activity preceding spontaneous voluntary movements (SVMs) "may reflect the ebb and flow of background neuronal noise, rather than the outcome of a specific neural event corresponding to a ‘decision’ to initiate movement... [Several studies] have converged in showing that bounded-integration processes, which involve the accumulation of noisy evidence until a decision threshold is reached, offer a coherent and plausible explanation for the apparent pre-movement build-up of neuronal activity."
SVMs rely on the same neural decision mechanism used for perceptual decisions – integration to bound – except that in this case there is no specific external sensory evidence to integrate. In particular, when actions are initiated spontaneously rather than in response to a sensory cue, the process of integration to bound is dominated by ongoing stochastic fluctuations in neural activity that influence the precise moment at which the decision threshold is reached. In this context, time locking to movement onset means time locking to crests in these temporally autocorrelated background fluctuations, which results in the appearance of a slow, nonlinear build-up in the average. This in turn gives the natural but erroneous impression of a goal-directed brain process corresponding to the ‘cerebral initiation of a spontaneous voluntary act’
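The integration-to-bound account is easy to simulate. Below is a minimal sketch of a leaky stochastic accumulator (the parameter values are illustrative, not those fitted by Schurger et al.) showing how averaging epochs time-locked to the threshold crossing produces a slow build-up resembling the readiness potential, even though nothing goal-directed precedes the crossing.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                 # time step (s)
leak, drift = 0.5, 0.1     # leak rate and constant input (illustrative values)
noise, bound = 0.1, 0.3    # noise amplitude and decision threshold
n_trials, pre_window = 500, 2000   # average the 2 s of activity preceding each crossing

epochs = []
for _ in range(n_trials):
    x, trace = 0.0, []
    while x < bound:       # leaky accumulation of noisy input until the bound is reached
        x += (drift - leak * x) * dt + noise * np.sqrt(dt) * rng.normal()
        trace.append(x)
    if len(trace) >= pre_window:
        epochs.append(trace[-pre_window:])

rp_like = np.mean(epochs, axis=0)   # average time-locked to the threshold crossing
print("mean activity 2 s before crossing:", round(rp_like[:200].mean(), 3))
print("mean activity just before crossing:", round(rp_like[-200:].mean(), 3))
```

The averaged trace rises smoothly toward the bound only because averaging is locked to crests in autocorrelated noise, which is exactly the point made in the quoted passage.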
The Philosophical Implications:
Many have found Libet et al.’s results so striking because they appear to clash with our commonsense view of action initiation. However, the novel interpretation of the RP that the stochastic decision model provides actually suggests a close correspondence between the two. When one forms an intention to act, one is significantly disposed to act but not yet fully committed. The commitment comes when one finally decides to act. The stochastic decision model reveals a remarkably similar picture on the neuronal level, with the decision to act being a threshold-crossing neural event that is preceded by a neural tendency toward this event.
In addition, dropping the problematic theoretical assumption that a decision to act cannot occur without being conscious also helps to dispel the apparent air of ‘paradox’ surrounding these findings. As with other types of mental occurrence, the decision to initiate an action can occur before one is aware of it. So we can identify the neural initiating event with a decision that we may become aware of only a brief instant later. All this leaves our commonsense picture largely intact.
Finally, distinguishing between the decision to act and the earlier forming of an intention fits well with the distinction between a prediction and a forecast. If our concern is merely forecasting, what is relevant is the less-committed event of an intention's forming, which we identify with the neural tendency. If our concern is prediction, we should focus on the later event of deciding, which we identify with the crossing of the threshold.
Added Note: Those interested in this vein of work might check Schultze-Kraft et al.:
In humans, spontaneous movements are often preceded by early brain signals. One such signal is the readiness potential (RP) that gradually arises within the last second preceding a movement. An important question is whether people are able to cancel movements after the elicitation of such RPs, and if so until which point in time. Here, subjects played a game where they tried to press a button to earn points in a challenge with a brain–computer interface (BCI) that had been trained to detect their RPs in real time and to emit stop signals. Our data suggest that subjects can still veto a movement even after the onset of the RP. Cancellation of movements was possible if stop signals occurred earlier than 200 ms before movement onset, thus constituting a point of no return.

Monday, February 15, 2016

Our eye movements are coupled to our heartbeats.

A fascinating finding by Ohl et al.: the darting about of our eyes (microsaccades) during visual fixation is coupled to our heartbeat (the R-R interval), providing further evidence for a powerful influence of the body on visuomotor functioning.

ABSTRACT
During visual fixation, the eye generates microsaccades and slower components of fixational eye movements that are part of the visual processing strategy in humans. Here, we show that ongoing heartbeat is coupled to temporal rate variations in the generation of microsaccades. Using coregistration of eye recording and ECG in humans, we tested the hypothesis that microsaccade onsets are coupled to the relative phase of the R-R intervals in heartbeats. We observed significantly more microsaccades during the early phase after the R peak in the ECG. This form of coupling between heartbeat and eye movements was substantiated by the additional finding of a coupling between heart phase and motion activity in slow fixational eye movements; i.e., retinal image slip caused by physiological drift. Our findings therefore demonstrate a coupling of the oculomotor system and ongoing heartbeat, which provides further evidence for bodily influences on visuomotor functioning. 

SIGNIFICANCE STATEMENT
In the present study, we show that microsaccades are coupled to heartbeat. Moreover, we revealed a strong modulation of slow eye movements around the R peak in the ECG. These results suggest that heartbeat as a basic physiological signal is related to statistical modulations of fixational eye movements, in particular, the generation of microsaccades. Therefore, our findings add a new perspective on the principles underlying the generation of fixational eye movements. Importantly, our study highlights the need to record eye movements when studying the influence of heartbeat in neuroscience to avoid misinterpretation of eye-movement-related artifacts as heart-evoked modulations of neural processing.
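The core computation behind such an analysis is simple to sketch: express each microsaccade onset as a relative phase within its surrounding R-R interval and ask whether those phases cluster. The placeholder data and the bare-bones resultant-vector statistic below are my own simplifications, not the authors' actual (more elaborate) statistical procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
r_peaks = np.cumsum(rng.normal(0.9, 0.05, 300))                 # simulated R-peak times (s)
saccades = np.sort(rng.uniform(r_peaks[0], r_peaks[-1], 400))   # simulated microsaccade onsets (s)

idx = np.searchsorted(r_peaks, saccades) - 1        # index of the R peak preceding each onset
valid = (idx >= 0) & (idx < len(r_peaks) - 1)
idx, onsets = idx[valid], saccades[valid]
phase = (onsets - r_peaks[idx]) / (r_peaks[idx + 1] - r_peaks[idx])   # relative phase in [0, 1)

# Rayleigh-style statistic: length of the mean resultant vector of the phases.
angles = 2 * np.pi * phase
R = np.abs(np.mean(np.exp(1j * angles)))
print(f"mean resultant length = {R:.3f} (near 0 means no phase coupling in these random data)")
```

A clustering of onsets early after the R peak, as the authors report, would show up as a resultant length well above what shuffled onset times produce.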

Friday, February 12, 2016

What makes political leaders influential?

Brinke et al. examine whether politicians who ascend to influence have been driven by Aristotelian or Machiavellian values (i.e., virtue versus vice). Is political influence to be found in virtuous practices such as temperance, courage, kindness, and humility, or does it require force, fraud, manipulation, and strategic violence? Hmmmm....where would we place Ted Cruz and Donald Trump with regard to these qualities?
What qualities make a political leader more influential or less influential? Philosophers, political scientists, and psychologists have puzzled over this question, positing two opposing routes to political power—one driven by human virtues, such as courage and wisdom, and the other driven by vices, such as Machiavellianism and psychopathy. By coding nonverbal behaviors displayed in political speeches, we assessed the virtues and vices of 151 U.S. senators. We found that virtuous senators became more influential after they assumed leadership roles, whereas senators who displayed behaviors consistent with vices—particularly psychopathy—became no more influential or even less influential after they assumed leadership roles. Our results inform a long-standing debate about the role of morality and ethics in leadership and have important implications for electing effective government officials. Citizens would be wise to consider a candidate’s virtue in casting their votes, which might increase the likelihood that elected officials will have genuine concern for their constituents and simultaneously promote cooperation and progress in government.

Thursday, February 11, 2016

Linking sustained attention to brain connectivity.

Rosenberg et al. find that strengths of a specific set of brain connections - even when estimated from resting state data collected when subjects are not carrying out any explicit task - can be used to predict a subject's attention ability with high accuracy. A large set of connections is involved during successful attention, and a different large set correlates with lack of attention.
Although attention plays a ubiquitous role in perception and cognition, researchers lack a simple way to measure a person's overall attentional abilities. Because behavioral measures are diverse and difficult to standardize, we pursued a neuromarker of an important aspect of attention, sustained attention, using functional magnetic resonance imaging. To this end, we identified functional brain networks whose strength during a sustained attention task predicted individual differences in performance. Models based on these networks generalized to previously unseen individuals, even predicting performance from resting-state connectivity alone. Furthermore, these same models predicted a clinical measure of attention—symptoms of attention deficit hyperactivity disorder—from resting-state connectivity in an independent sample of children and adolescents. These results demonstrate that whole-brain functional network strength provides a broadly applicable neuromarker of sustained attention.

Functional connections predicting gradCPT performance and ADHD-RS scores. (gradCPT is a test of sustained attention and inhibition that produces a range of behavior across healthy participants; ADHD-RS is a clinical measure of attention deficit hyperactivity disorder.)
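For the technically inclined, here is a toy sketch of the kind of connectome-based prediction pipeline the abstract describes: correlate every connection with behavior in training subjects, sum the strengths of the most strongly related connections into a single network score, and predict held-out subjects from that score. The synthetic data, the fixed selection cutoff, and the tiny network size are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_edges = 50, 2000                 # toy sizes; a whole-brain connectome has far more edges
edges = rng.normal(size=(n_subj, n_edges)) # functional connectivity strengths (subjects x edges)
behavior = 2 * edges[:, 0] + edges[:, 1] + rng.normal(0, 0.5, n_subj)  # toy "attention" scores

preds = np.zeros(n_subj)
for test in range(n_subj):                 # leave-one-subject-out cross-validation
    train = np.delete(np.arange(n_subj), test)
    X, y = edges[train], behavior[train]
    Xc, yc = X - X.mean(0), y - y.mean()
    # correlation of every edge with behavior in the training subjects
    r = (Xc * yc[:, None]).sum(0) / np.sqrt((Xc ** 2).sum(0) * (yc ** 2).sum())
    mask = r > 0.5                         # fixed cutoff for this toy; the paper used a statistical threshold
    score = X[:, mask].sum(axis=1)         # one summary "network strength" per training subject
    slope, intercept = np.polyfit(score, y, 1)
    preds[test] = slope * edges[test, mask].sum() + intercept

print("predicted vs observed r =", round(np.corrcoef(preds, behavior)[0, 1], 2))
```

The key property, as in the paper, is that the model is built entirely on training subjects and then applied to individuals it has never seen.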

Wednesday, February 10, 2016

Everybody's a critic. And that's how it should be.

An article with the title of this post by NYTimes movie critic A.O. Scott is well worth a read. Here are his noble closing sentiments:
...criticism remains an indispensable activity. The making of art — popular or fine, abstruse or accessible, sacred or profane — is one of the glories of our species. We are uniquely endowed with the capacity to fashion representations of the world and our experience in it, to tell stories and draw pictures, to organize sound into music and movement into dance. Just as miraculously, we have the ability, even the obligation, to judge what we have made, to argue about why we are moved, mystified, delighted or bored by any of it. At least potentially, we are all artists. And because we have the ability to recognize and respond to the creativity of others, we are all, at least potentially, critics, too.
...It’s the mission of art to free our minds, and the task of criticism to figure out what to do with that freedom. That everyone is a critic means that we are each capable of thinking against our own prejudices, of balancing skepticism with open-mindedness, of sharpening our dulled and glutted senses and battling the intellectual inertia that surrounds us. We need to put our remarkable minds to use and to pay our own experience the honor of taking it seriously.
The real culture war (the one that never ends) is between the human intellect and its equally human enemies: sloth, cliché, pretension, cant. Between creativity and conformity, between the comforts of the familiar and the shock of the new. To be a critic is to be a soldier in this fight, a defender of the life of art and a champion of the art of living.
It’s not just a job, in other words.

Tuesday, February 09, 2016

Fantasies of the Future

Following up on my Jan. 28 post on Schwab's book "The Fourth Industrial Revolution" I thought I would pass on some speculations in the appendix of the book on the technology shifts enabled by digital connectivity and software technologies that might "fundamentally change society by 2025" (Really? Gimme a break...).

Wearable internet
Examples: A tech-enabled baby onesie that tracks a baby's breathing, body movements, and sleep patterns and quality, and transmits that information in real time to a smartphone app. A Ralph Lauren PoloTech shirt with silver fibers woven directly into the fabric that reads heart rate, breathing depth, and balance, as well as other key metrics, streamed to a computer or smartphone via a detachable, Bluetooth-enabled black box. A sensor that collects data about multiple chemicals in body sweat.
Implantable Technologies - Pacemakers and cochlear implants represent a beginning.
Smart tattoos and other unique chips could help with identification and location. Implanted devices will likely also help to communicate thoughts normally expressed verbally through a “built-in” smart phone, and potentially unexpressed thoughts or moods by reading brainwaves and other signals. (See "Top ten wearables soon to be in your body.")
Vision as the New Interface
Glasses are already on the market today (not just produced by Google) that can:
– Allow you to freely manipulate a 3D object, enabling it to be moulded like clay
– Provide all the extended live information you need when you see something, in the same way the brain functions
– Prompt you with an overlay menu of the restaurant you pass by
– Project a picture or video on any piece of paper.
(See 10 Forthcoming Augmented Reality & Smart Glasses You Can Buy.)
The internet of and for Things
It is economically feasible to connect literally anything to the internet. Intelligent sensors are already available at very competitive prices. All things will be smart and connected to the internet, enabling greater communication and new data-driven services based on increased analytics capabilities...The Ford GT has 10 million lines of computer code in it. And, see The connected home.
I am fatiguing... the list continues with mention of smart cities, driverless cars, big data for decisions, artificial intelligence for decision making (and the decimation of current white-collar jobs), the sharing economy, 3D printing for manufacturing and health, personalized medicine, designer humans, neurotechnologies, brain technologies, etc.

Monday, February 08, 2016

Our digital presence.

Here is an interesting tally comparing active users of social media sites with the populations of the world's largest countries. If social media sites were countries, Facebook would be the world’s largest, with more active accounts than there are people in China; Twitter would rank 4th, with twice the “population” of the USA; and Instagram would round out the top 10.

There are ~7.4 billion people in the world; ~43% are connected to the internet, and the remaining ~4 billion, mostly in the developing world, lack internet access. Most pundits expect that by 2025 digital access will have spread to 80% of all people.

Friday, February 05, 2016

Consciousness as the product of carefully balanced chaos.

Tagliazucchi and collaborators have provided more evidence that our experience of consciousness and reality may depend on a delicate balance, a critical level of connectivity between brain networks at which the brain explores the maximum number of unique pathways to generate meaning. Consciousness slips away if there is too much or too little connectivity. Their abstract:
Loss of cortical integration and changes in the dynamics of electrophysiological brain signals characterize the transition from wakefulness towards unconsciousness. In this study, we arrive at a basic model explaining these observations based on the theory of phase transitions in complex systems. We studied the link between spatial and temporal correlations of large-scale brain activity recorded with functional magnetic resonance imaging during wakefulness, propofol-induced sedation and loss of consciousness and during the subsequent recovery. We observed that during unconsciousness activity in frontothalamic regions exhibited a reduction of long-range temporal correlations and a departure of functional connectivity from anatomical constraints. A model of a system exhibiting a phase transition reproduced our findings, as well as the diminished sensitivity of the cortex to external perturbations during unconsciousness. This framework unifies different observations about brain activity during unconsciousness and predicts that the principles we identified are universal and independent from its causes.
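The abstract's "long-range temporal correlations" can be quantified in several ways; one common choice is detrended fluctuation analysis (DFA), sketched below on synthetic signals. This is a generic illustration of the measure, not the authors' specific phase-transition model or analysis pipeline.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Return the DFA scaling exponent alpha (alpha ~ 0.5 for white noise,
    alpha > 0.5 for signals with long-range temporal correlations)."""
    profile = np.cumsum(signal - signal.mean())
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        segs = profile[:n_windows * s].reshape(n_windows, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                       # detrend each window with a linear fit
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(4)
white = rng.normal(size=4096)                  # uncorrelated signal
correlated = np.cumsum(white) * 0.01 + white   # adds a slowly varying component (alpha > 0.5)
scales = np.array([16, 32, 64, 128, 256])
print("alpha (white):", round(dfa_exponent(white, scales), 2))
print("alpha (correlated):", round(dfa_exponent(correlated, scales), 2))
```

In this framework, a drop in the scaling exponent toward 0.5 during unconsciousness would correspond to the reported reduction of long-range temporal correlations.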

Thursday, February 04, 2016

Inequality and facing the future.

A recent piece by Arianna Huffington (The fourth industrial revolution meets the sleep revolution) suggests to me yet another driver of future inequality, beyond the declining share of income going to labor compared with capital. Two clear castes of people are emerging: those who can adapt psychologically to the mind-numbing complexity of the emerging digital environment by optimizing their minds and bodies (meditation, sleep, exercise, diet, etc.), and those of lower socioeconomic status who start off disadvantaged (see yesterday’s post) and find that their only mental refuge is some form of fundamentalism, a closing rather than an opening of their minds (cf. Donald Trump and Ted Cruz supporters). Given the chaos and disruption being visited on traditional political and economic arrangements by the fusion of digital, biological, and physical advances - the internet of things meeting the smart factory meeting synthetic biology - how are all humans going to be able to share in understanding and shaping these changes in a way that keeps human beings at the center?

Wednesday, February 03, 2016

The neuroscience of poverty.

This open-access review article by Alla Katsnelson is sobering and worth a read. The major foci in the brain that appear to show disparities in poor children are the hippocampus and the frontal lobe. I pass on this graphic illustrating differences in total brain gray matter (nerve cell) volume between young children of middle and low socioeconomic status.


Tuesday, February 02, 2016

The burden of obesity revisited.

A previous MindBlog post noted the obesity paradox, generated by data suggesting that fat people may live longer. Stokes and Preston, however, note an important distinction that may dissolve the apparent paradox:
Analyses of the relation between obesity and mortality typically evaluate risk with respect to weight recorded at a single point in time. As a consequence, there is generally no distinction made between nonobese individuals who were never obese and nonobese individuals who were formerly obese and lost weight. We introduce additional data on an individual’s maximum attained weight and investigate four models that represent different combinations of weight at survey and maximum weight. We use data from the 1988–2010 National Health and Nutrition Examination Survey, linked to death records through 2011, to estimate parameters of these models. We find that the most successful models use data on maximum weight, and the worst-performing model uses only data on weight at survey. We show that the disparity in predictive power between these models is related to exceptionally high mortality among those who have lost weight, with the normal-weight category being particularly susceptible to distortions arising from weight loss. These distortions make overweight and obesity appear less harmful by obscuring the benefits of remaining never obese. Because most previous studies are based on body mass index at survey, it is likely that the effects of excess weight on US mortality have been consistently underestimated.
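The modeling point is easy to illustrate: fit mortality models using weight at survey versus maximum attained weight and compare their fit. The sketch below does this with logistic regressions and AIC on synthetic data built to mimic the mechanism the authors describe (weight loss both lowers current BMI and marks elevated risk); it is not a reanalysis of the NHANES data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
bmi_max = rng.normal(29, 5, n)                            # maximum attained BMI
weight_loss = np.clip(rng.normal(1.5, 2.0, n), 0, None)   # BMI units lost since the maximum
bmi_survey = bmi_max - weight_loss                        # BMI recorded at survey
# Simulated mortality that tracks maximum BMI and is elevated among those who lost weight,
# mimicking the mechanism described in the abstract.
logit_p = -4 + 0.08 * (bmi_max - 25) + 0.15 * weight_loss
died = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

for name, predictor in [("BMI at survey", bmi_survey), ("maximum BMI", bmi_max)]:
    fit = sm.Logit(died, sm.add_constant(predictor)).fit(disp=0)
    print(f"{name}: AIC = {fit.aic:.1f}")
# With data generated this way, the maximum-BMI model should show the lower (better) AIC,
# echoing the paper's finding that models using maximum weight outperform weight at survey.
```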

Monday, February 01, 2016

The 8 second attention span.

Wow, Egan notes that our average attention span has fallen to eight seconds, down from 12 in the year 2000. (That makes it gratifying that the average amount of time spent by someone on this website is over 4 minutes.)
...a quote from Satya Nadella, the chief executive officer of Microsoft... “The true scarce commodity of the near future will be human attention.”...Putting aside Microsoft’s self-interest in promoting quick-flash digital ads with what may be junk science, there seems little doubt that our devices have rewired our brains. We think in McNugget time. The trash flows, unfiltered, along with the relevant stuff, in an eternal stream. And the last hit of dopamine only accelerates the need for another one.
You see it in the press, the obsession with mindless listicles that have all the staying power of a Popsicle. You see it in our politics, with fear-mongering slogans replacing anything that requires sustained thought. And the collapse of a fact-based democracy, where, for example, 60 percent of Trump supporters believe Obama was born in another country, has to be a byproduct of the pick-and-choose news from the buffet line of our screens.
Egan suggests that gardening and deep reading of biographies are useful antidotes.

Friday, January 29, 2016

Fourth Industrial Revolution?? Maybe not.....

Just after starting my second slog through Schwab's 'The Fourth Industrial Revolution,' noted in yesterday's post, I see Paul Krugman's review of 'The Rise and Fall of American Growth' by Robert Gordon. Gordon's magnum opus suggests that Schwab's futurism is overblown. (See also Thomas Edsall's excellent piece on the divide and debate between optimistic and pessimistic economists, and this Slate article that chronicles how many times over the past 75 years the term "fourth industrial revolution" has been trotted out to describe a recent or coming advance.) Some clips from Krugman's review:
...[Gordon] has argued that the I.T. revolution is less important than any one of the five Great Inventions that powered economic growth from 1870 to 1970: electricity, urban sanitation, chemicals and pharmaceuticals, the internal combustion engine and modern communication.
What happened between 1870 and 1940, he argues, and I would agree, is what real transformation looks like. Any claims about current progress need to be compared with that baseline to see how they measure up.
And it’s hard not to agree with him that nothing that has happened since is remotely comparable. Urban life in America on the eve of World War II was already recognizably modern; you or I could walk into a 1940s apartment, with its indoor plumbing, gas range, electric lights, refrigerator and telephone, and we’d find it basically functional. We’d be annoyed at the lack of television and Internet — but not horrified or disgusted.
By contrast, urban Americans from 1940 walking into 1870-style accommodations — which they could still do in the rural South — were indeed horrified and disgusted. Life fundamentally improved between 1870 and 1940 in a way it hasn’t since.
One of Gordon's arguments against the techno-optimists is that:
...genuinely major innovations normally bring about big changes in business practices, in what workplaces look like and how they function. And there were some changes along those lines between the mid-1990s and the mid-2000s — but not much since, which is evidence for Gordon’s claim that the main impact of the I.T. revolution has already happened.
Techno-futurists would argue strongly against this, citing the rise of the sharing economy, entities like Airbnb and Uber, and changes in the workplace from hierarchical to distributed organization.
Gordon suggests that the future is all too likely to be marked by stagnant living standards for most Americans, because the effects of slowing technological progress will be reinforced by a set of “headwinds”: rising inequality, a plateau in education levels, an aging population and more.
It’s a shocking prediction for a society whose self-image, arguably its very identity, is bound up with the expectation of constant progress. And you have to wonder about the social and political consequences of another generation of stagnation or decline in working-class incomes.
Of course, Gordon could be wrong: Maybe we’re on the cusp of truly transformative change, say from artificial intelligence or radical progress in biology (which would bring their own risks). But he makes a powerful case. Perhaps the future isn’t what it used to be.

Thursday, January 28, 2016

Predicting the Future: The Fourth Industrial Revolution

Klaus Schwab, the guy who runs the annual World Economic Forum of “very important people” in Davos, Switzerland, has generated a book with the title of this year's meeting, “The Fourth Industrial Revolution,” about a future that is both terrifying and optimistic. Despite its gobbledegook, committee-speak, and bullet points, it provides a lot of fascinating information and is well worth a look. After doing a quick read-through, I’m starting a second, more careful read and clicking through to many of the references (the Kindle version is handy for this).
The first industrial revolution spanned from about 1760 to around 1840. Triggered by the construction of railroads and the invention of the steam engine, it ushered in mechanical production. The second industrial revolution, which started in the late 19th century and into the early 20th century, made mass production possible, fostered by the advent of electricity and the assembly line. The third industrial revolution began in the 1960s. It is usually called the computer or digital revolution because it was catalysed by the development of semiconductors, mainframe computing (1960s), personal computing (1970s and 80s) and the internet (1990s). Mindful of the various definitions and academic arguments used to describe the first three industrial revolutions, I believe that today we are at the beginning of a fourth industrial revolution. It began at the turn of this century and builds on the digital revolution. It is characterized by a much more ubiquitous and mobile internet, by smaller and more powerful sensors that have become cheaper, and by artificial intelligence and machine learning.
I am well aware that some academics and professionals consider the developments that I am looking at as simply a part of the third industrial revolution. Three reasons, however, underpin my conviction that a fourth and distinct revolution is underway:
Velocity: Contrary to the previous industrial revolutions, this one is evolving at an exponential rather than linear pace. This is the result of the multifaceted, deeply interconnected world we live in and the fact that new technology begets newer and ever more capable technology.
Breadth and depth: It builds on the digital revolution and combines multiple technologies that are leading to unprecedented paradigm shifts in the economy, business, society, and individually. It is not only changing the “what” and the “how” of doing things but also “who” we are.
Systems Impact: It involves the transformation of entire systems, across (and within) countries, companies, industries and society as a whole.

Wednesday, January 27, 2016

The ascendant candidacy of Donald J. Trump

You should check Steven Rattner's interesting commentary asking what we owe to those on the losing end of globalization:

Percent change in number of US employees in each industry, 2000 to 2015


Percent change in wages for each industry, 2009 to 2015:


Not surprisingly, the shifts shown in these graphs are ending badly politically. The 'losers' form a core of support for both Donald Trump and Ted Cruz.

Axelrod notes:

Mr. Trump has found an audience with Americans disgruntled by the rapid, disorderly change they associate with national decline and their own uncertain prospects. Policies be damned, who better to set things right than the defiant strong man who promises by sheer force of will to make America great again?

Open-seat presidential elections are shaped by perceptions of the style and personality of the outgoing incumbent. Voters rarely seek the replica of what they have. They almost always seek the remedy, the candidate who has the personal qualities the public finds lacking in the departing executive...who among the Republicans is more the antithesis of Mr. Obama than the trash-talking, authoritarian, give-no-quarter Mr. Trump?

Tuesday, January 26, 2016

Social relationships and biomarkers of longevity across our life span.

Yang et al. aggregate a massive amount of data from four large longitudinal surveys to show associations between social relationships and physiological markers of health:
Two decades of research indicate causal associations between social relationships and mortality, but important questions remain as to how social relationships affect health, when effects emerge, and how long they last. Drawing on data from four nationally representative longitudinal samples of the US population, we implemented an innovative life course design to assess the prospective association of both structural and functional dimensions of social relationships (social integration, social support, and social strain) with objectively measured biomarkers of physical health (C-reactive protein, systolic and diastolic blood pressure, waist circumference, and body mass index) within each life stage, including adolescence and young, middle, and late adulthood, and compare such associations across life stages. We found that a higher degree of social integration was associated with lower risk of physiological dysregulation in a dose–response manner in both early and later life. Conversely, lack of social connections was associated with vastly elevated risk in specific life stages. For example, social isolation increased the risk of inflammation by the same magnitude as physical inactivity in adolescence, and the effect of social isolation on hypertension exceeded that of clinical risk factors such as diabetes in old age. Analyses of multiple dimensions of social relationships within multiple samples across the life course produced consistent and robust associations with health. Physiological impacts of structural and functional dimensions of social relationships emerge uniquely in adolescence and midlife and persist into old age.

Monday, January 25, 2016

The evolution of dance

From Laland et al.:
Evidence from multiple sources reveals a surprising link between imitation and dance. As in the classical correspondence problem central to imitation research, dance requires mapping across sensory modalities and the integration of visual and auditory inputs with motor outputs. Recent research in comparative psychology supports this association, in that entrainment to a musical beat is almost exclusively observed in animals capable of vocal or motor imitation. Dance has representational properties that rely on the dancers’ ability to imitate particular people, animals or events, as well as the audience’s ability to recognize these correspondences. Imitation also plays a central role in learning to dance and the acquisition of the long sequences of choreographed movements are dependent on social learning. These and other lines of evidence suggest that dancing may only be possible for humans because its performance exploits existing neural circuitry employed in imitation.
This painting by Edgar Degas not only depicts a ballet rehearsal but also illustrates the roles of imitation and synchrony.
Clips from the body of the article, with another figure:
...there can be no doubt that, compared to other animals, humans are exceptional imitators. It may be no coincidence that a recent PET scan analysis of the neural basis of dance found that foot movement to music excited regions of neural circuitry (e.g. the right frontal operculum) previously associated with imitation. Dancing may only be possible because its performance exploits the neural circuitry employed in imitation. Such reasoning applies equally where individuals dance alone; unlike much human behavior, dancing inherently seems to require a brain capable of solving the correspondence problem.

Dance often tells a story, and this representational quality provides another link with imitation. For instance, in the astronomical dances of ancient Egypt, priests and priestesses, accompanied by harps and pipes, mimed significant events in the story of a god or imitated cosmic patterns, such as the rhythm of night and day. Africa, Asia, Australasia and Europe all possess long-standing traditions for masked dances, in which the performers portray the character associated with the mask and enact religious stories. Native Americans have many animal dances, such as the Buffalo dance, which was thought to lure buffalo herds close to the village, and the eagle dance, which is a tribute to these revered birds. This tradition continues to the present. In 2009, Rambert (formerly the Rambert Dance Company), a world leader in contemporary dance, marked the bicentenary of Charles Darwin’s birth and 150th anniversary of his seminal work On the Origin of Species by collaborating with one of us (N.C.) to produce Comedy of Change (Figure above), which evoked animal behaviour on stage with spellbinding accuracy. In all such instances, the creation and performance of the dance requires an ability on the part of the dancer to imitate the movements and sounds of particular people, animals, or events. Such dances re-introduce the correspondence problem, as the dancer, choreographer and audience must be able to connect the dancers’ movements to the target phenomenon they represent.

Friday, January 22, 2016

On teaching old dogs new tricks.

Metcalfe et al. find that older healthy adults not only are better than young adults at answering general-information questions in the first place, but also, when they do make a mistake, they are more likely than young adults to correct those errors. Correcting errors is, of course, the quintessential new-learning task: To correct mistakes, one needs to supplant entrenched responses with new ones. The fact that older adults display greater facility at error correction than young adults contravenes the view that aging necessarily produces cognitive rigidity and an inability to learn. Here is their abstract:
Although older adults rarely outperform young adults on learning tasks, in the study reported here they surpassed their younger counterparts not only by answering more semantic-memory general-information questions correctly, but also by better correcting their mistakes. While both young and older adults exhibited a hypercorrection effect, correcting their high-confidence errors more than their low-confidence errors, the effect was larger for young adults. Whereas older adults corrected high-confidence errors to the same extent as did young adults, they outdid the young in also correcting their low-confidence errors. Their event-related potentials point to an attentional explanation: Both groups showed a strong attention-related P3a in conjunction with high-confidence-error feedback, but the older adults also showed strong P3as to low-confidence-error feedback. Indeed, the older adults were able to rally their attentional resources to learn the true answers regardless of their original confidence in the errors and regardless of their familiarity with the answers.

Thursday, January 21, 2016

Several perspectives on the valuation of outgroups.

A recent issue of the Proceedings of the National Academy of Science has two relevant articles:

Keelah et al. show that Americans’ stereotypes about racial groups may actually reflect their stereotypes about these groups’ presumed home ecologies. Harsh and unpredictable (“desperate”) ecologies induce fast strategy behaviors such as impulsivity, whereas resource-sufficient and predictable (“hopeful”) ecologies induce slow strategy behaviors such as future focus.
...when provided with information about a person’s race (but not ecology), individuals’ inferences about blacks track stereotypes of people from desperate ecologies, and individuals’ inferences about whites track stereotypes of people from hopeful ecologies. However, when provided with information about both the race and ecology of others, individuals’ inferences reflect the targets’ ecology rather than their race: black and white targets from desperate ecologies are stereotyped as equally fast life history strategists, whereas black and white targets from hopeful ecologies are stereotyped as equally slow life history strategists. These findings suggest that the content of several predominant race stereotypes may not reflect race, per se, but rather inferences about how one’s ecology influences behavior.
And, Ginges et al. show that thinking from God's perspective decreases biased valuation of the life of a nonbeliever.
Religious belief is often thought to motivate violence because it is said to promote norms that encourage tribalism and the devaluing of the lives of nonbelievers. If true, this should be visible in the multigenerational violent conflict between Palestinians and Israelis which is marked by a religious divide. We conducted experiments with a representative sample of Muslim Palestinian youth (n = 555), examining whether thinking from the perspective of Allah (God), who is the ultimate arbitrator of religious belief, changes the relative value of Jewish Israelis’ lives (compared with Palestinian lives). Participants were presented with variants of the classic “trolley dilemma,” in the form of stories where a man can be killed to save the lives of five children who were either Jewish Israeli or Palestinian. They responded from their own perspective and from the perspective of Allah. We find that whereas a large proportion of participants were more likely to endorse saving Palestinian children than saving Jewish Israeli children, this proportion decreased when thinking from the perspective of Allah. This finding raises the possibility that beliefs about God can mitigate bias against other groups and reduce barriers to peace.
Also, in the journal Psychological Science, Roets et al. consider the case of Singapore, which contradicts:
...numerous empirical studies that have consistently demonstrated the seemingly inextricable link between authoritarianism and negative attitudes about out-groups. Indeed, in the authoritarian mind, minorities are readily perceived as “bad, disruptive, immoral, and deviant” people who do not fit into society... However, what if authoritarians live in a society in which a very strong and established authority most explicitly endorses diversity and multiculturalism, thereby enforcing a social norm that is in direct opposition to authoritarians’ “natural” negative attitudes toward minorities? Over the past 50 years, the Singaporean government (run by the People’s Action Party) has been highly committed to regulating its ethnically diverse society and promoting multiculturalism through a variety of ingenious yet most consequential measures. A prime example is the imposition of strict ethnic quotas in public residential estates
They analyzed data from a questionnaire measuring authoritarianism that was completed by 249 Singaporean students (the target sample) and 245 Belgian students (the comparison group)...the Belgian control group showed the usual negative relationships between authoritarianism and multiculturalism and between authoritarianism and positive attitudes about out-groups, as found in all previous research. In the Singaporean sample, however, there were significant, positive relationships between authoritarianism and multiculturalism and between authoritarianism and positive attitudes about out-groups... [The] results demonstrate that when a strong authority explicitly and relentlessly endorses diversity and multiculturalism, such a perspective can be adopted even (and especially) by people who are intuitively most opposed to diversity.
You might also note the comments of Aaron Wendland on the writings of Emmanuel Levinas, after World War II, on deep-seated and often irrational fear of the “other.”
Levinas’s antihistamine for our allergic reactions involves three things: an appeal to the “infinity” in human beings, a detailed description of face-to-face encounters and an account of a basic hospitality that constitutes humanity.

Wednesday, January 20, 2016

Paying attention to your body can increase resilience to stress

Resilience is the ability to rapidly return to normal, both physically and emotionally, after a stressful event. Reynolds points to work by Haase et al., who provide fMRI evidence that in high-resilience individuals, brain areas receiving signals from the body become more active during stress and suppress signals to brain areas that intensify bodily arousal. Individuals with lower resilience show reduced attention to bodily signals but greater neural processing of aversive bodily perturbations. Here's the Haase et al. abstract:

Highlights
• Low resilience individuals are less sensitive to body-relevant information.
• Low resilience individuals show an exaggerated brain response to an aversive interoceptive stimulus.
• This mismatch between attention to and processing of interoceptive afferents may result in poor adaptation in stressful situations.
Abstract
This study examined neural processes of resilience during aversive interoceptive processing. Forty-six individuals were divided into three resilience groups: low (LowRes), high (HighRes), and normal (NormRes), based on the Connor-Davidson Resilience Scale (2003). Participants then completed a task involving anticipation and experience of loaded breathing during functional magnetic resonance imaging (fMRI) recording. Compared to the HighRes and NormRes groups, LowRes individuals self-reported lower levels of interoceptive awareness and demonstrated higher insular and thalamic activation across anticipation and breathing load conditions. Thus, individuals with lower resilience show reduced attention to bodily signals but greater neural processing of aversive bodily perturbations. In low resilient individuals, this mismatch between attention to and processing of interoceptive afferents may result in poor adaptation in stressful situations.

Tuesday, January 19, 2016

Biochemical individuality obsoletes many dietary recommendations and the glycemic index.

An important paper by a Weizmann Institute group has been languishing in my list of potential posts, and Murphy's summary of their work now prods me to pass on their bottom line: the glycemic index, used to rank foods according to their effects on blood sugar, is not really useful, because people differ strikingly in their individual biochemistries, that is, in the ways they extract energy from different foods. The Weizmann group devised "a machine-learning algorithm that integrates blood parameters, dietary habits, anthropometrics, physical activity, and gut microbiota measured...and showed that it accurately predicts personalized postprandial glycemic response to real-life meals." Companies like Nutrigenomix have begun to offer personalized nutrition assessment based on individual genomics. Here is the Weizmann Inst. info:

Highlights
•High interpersonal variability in post-meal glucose observed in an 800-person cohort 
•Using personal and microbiome features enables accurate glucose response prediction 
•Prediction is accurate and superior to common practice in an independent cohort 
•Short-term personalized dietary interventions successfully lower post-meal glucose
Summary
Elevated postprandial blood glucose levels constitute a global epidemic and a major risk factor for prediabetes and type II diabetes, but existing dietary methods for controlling them have limited efficacy. Here, we continuously monitored week-long glucose levels in an 800-person cohort, measured responses to 46,898 meals, and found high variability in the response to identical meals, suggesting that universal dietary recommendations may have limited utility. We devised a machine-learning algorithm that integrates blood parameters, dietary habits, anthropometrics, physical activity, and gut microbiota measured in this cohort and showed that it accurately predicts personalized postprandial glycemic response to real-life meals. We validated these predictions in an independent 100-person cohort. Finally, a blinded randomized controlled dietary intervention based on this algorithm resulted in significantly lower postprandial responses and consistent alterations to gut microbiota configuration. Together, our results suggest that personalized diets may successfully modify elevated postprandial blood glucose and its metabolic consequences.
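For the technically curious, the heart of the approach is a regression model that maps personal, dietary, and microbiome features onto each individual's measured glucose response to a meal. The sketch below is only a toy version of that idea, assuming a hypothetical data file and feature set; the published model is a far more elaborate boosted-tree ensemble trained on hundreds of features.

```python
# Toy sketch: predict per-meal postprandial glucose response from personal
# and microbiome features (all names hypothetical).
import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("meals.csv")  # one row per meal per participant (hypothetical)
features = ["carbs_g", "fat_g", "bmi", "hba1c", "microbiome_pc1", "hour_of_day"]
X, y = data[features], data["glucose_auc"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=500, max_depth=3).fit(X_tr, y_tr)

r, _ = pearsonr(model.predict(X_te), y_te)
print(f"predicted vs. measured glucose response: r = {r:.2f}")
```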

Monday, January 18, 2016

Posthumanism - the quantification craze and the death of beauty

An eloquent recent essay, “Among the Disrupted” by Leon Wieseltier (pointed to by a Brooks Op-Ed piece), is worth your attention. It opens with a screed on how journalism has degenerated into a “twittering cacophony of one-liners and promotional announcements,” its force of expression diminishing as its frequency grows, and how culture is being degraded by “the idolatry of metrics and quantification applied to things that cannot be captured by numbers.” Below are a few clips... Here is a core sentiment:
The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy. So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
Actually, I think the sentiments in the above paragraph are misguided (is Wieseltier really a classical vitalist?), but that's possible grist for another post. Continuing....
...the worldview that is ascendant may be described as posthumanism.
…what is humanism?…The most common understanding of humanism is that it denotes a pedagogy and a worldview…The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality; a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion.
Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
…machines may be more neutral about their uses than the propagandists and the advertisers want us to believe. We can leave aside the ideology of digitality and its aggressions, and regard the devices as simply new means for old ends. Tradition “travels” in many ways. It has already flourished in many technologies — but only when its flourishing has been the objective. I will give an example from the humanities. The day is approaching when the dream of the democratization of knowledge — Borges’s fantasy of “the total library” — will be realized. Soon all the collections in all the libraries and all the archives in the world will be available to everyone with a screen. Who would not welcome such a vast enfranchisement? But universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter. Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons. The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence…In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter. Never mind the platforms. Our solemn responsibility is for the substance.

Friday, January 15, 2016

Homosexuality as a discrete class.

Norris et al. contribute to previous work engaging the question of whether homosexuality has a taxonic structure of categories of individuals with distinct orientations, or whether sexual orientation lies on the sort of continuum suggested by Kinsey and others. Because individuals who report nonheterosexual identities, behavior, and attractions are more likely than heterosexual individuals to meet criteria for a psychiatric disorder, their study drew on the National Epidemiologic Survey on Alcohol and Related Conditions, which was conducted through personal interviews with one randomly selected adult in each household. Their abstract:
Previous research on the latent structure of sexual orientation has returned conflicting results, with some studies finding a dimensional structure (i.e., ranging quantitatively along a spectrum) and others a taxonic structure (i.e., categories of individuals with distinct orientations). The current study used a sample (N = 33,525) from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). A series of taxometric analyses were conducted using three indicators of sexual orientation: identity, behavior, and attraction. These analyses, performed separately for women and men, revealed low-base-rate same-sex-oriented taxa for men (base rate = 3.0% of those sampled) and women (base rate = 2.7%). Generally, taxon membership conferred an increased risk for psychiatric and substance-use disorders. Although taxa were present for men and women, women demonstrated greater sexual fluidity, such that any level of same-sex sexuality conferred taxon membership for men but not for women.
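For readers curious what "taxometric" means in practice: the question is whether the three indicators are better described by distinct latent classes than by a single continuum. The sketch below is not one of the specialized taxometric procedures used in this literature (e.g., MAMBAC or MAXEIG), just a crude stand-in that compares a one-class and a two-class mixture model by BIC; the indicator and file names are my own placeholders.

```python
# Crude class-vs-continuum comparison (not the authors' taxometric method).
import pandas as pd
from sklearn.mixture import GaussianMixture

df = pd.read_csv("orientation_indicators.csv")  # hypothetical extract
X = df[["identity", "behavior", "attraction"]].to_numpy()

for k in (1, 2):  # 1 = single continuous-ish cluster, 2 = taxonic (two classes)
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"{k}-class model: BIC = {gmm.bic(X):.1f}")  # lower BIC = better fit
```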

Another installment on anti-aging chemistry - pterostilbene

I thought I would report on my recent meandering into work on the putative anti-aging compounds resveratrol and pterostilbene, which are found in some fruits, nuts, and vegetables. (I reported my experience with resveratrol in a 2008 MindBlog post.) The meandering started with a glance at Weintraub's piece on the health benefits of red wine versus grape juice. She quoted M.I.T. anti-aging researcher Leonard Guarente, who started the company Elysium, which sells pterostilbene, a resveratrol cousin that is more easily absorbed after oral ingestion:


Poulose et al. have recently reviewed articles on the effects of pterostilbene and resveratrol on brain health and chemistry (motivated readers can obtain a PDF from me; see also the article by Mitteldorf and the website examine.com).

Even after the neutral to negative experience with resveratrol that I reported in the MindBlog post mentioned above, I've just ordered a bit of pterostilbene to see if I experience some of the cognitive effects claimed.  I'll report on the experience.

Thursday, January 14, 2016

How to be bad together - Punishing pro-social behavior.

I pass on part of a brief essay by Gloria Origgi noting the work of Gächter and Herrmann, who examined positive and negative reciprocity in different groups selected within Switzerland and Russia:
There is a vast literature showing how direct and indirect reciprocity are important tools for dealing with human cooperation. Many experiments have shown that people use “altruistic punishment” to sustain cooperation, that is, they are willing to pay without receiving anything back just in order to sanction those who don’t cooperate, and hence promote pro-social behavior.
Yet, Gächter and Herrmann showed in their surprising paper that in some cultures, when people were tested in cooperative games (such as the “public good game”), the people who cooperated were punished, rather than the free-riders.
In some societies, people prefer to act anti-socially and they take actions to make sure that the others do the same! This means that cooperation in societies is not always for the good: you can find cartels of anti-social people who don’t care at all for the common good and prefer to cooperate for keeping a status quo that suits them even if the collective outcome is a mediocre result.
As an Italian with first-hand experience in living in a country where, if you behave well, you are socially and legally sanctioned, this news was exciting, even inspiring … perhaps cooperation is not an inherent virtue of the human species. Perhaps, in many circumstances, we prefer to stay with those who share our selfishness and weaknesses and to avoid pro-social altruistic individuals. Perhaps it's not abnormal to live outside a circle of empathy.
So what’s the scientific “news that stays news”: Cooperation for the collective worse is as widespread as cooperation for a better society!
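The public goods game Origgi mentions is simple enough to simulate. Here is a minimal one-round sketch with a punishment stage; the parameters (endowment 20, multiplier 1.6, punishment points costing the punisher 1 and the target 3) follow a common experimental design, but treat them as illustrative rather than the exact Gächter and Herrmann setup.

```python
# One round of a public goods game followed by a punishment stage.

def public_goods_round(contributions, multiplier=1.6, endowment=20):
    """Each player keeps what she does not contribute, plus an equal share
    of the multiplied public pot."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, punishment, cost=1, impact=3):
    """punishment[i][j] = points player i assigns to player j; each point
    costs the punisher `cost` and reduces the target's payoff by `impact`."""
    out = list(payoffs)
    for i, row in enumerate(punishment):
        out[i] -= cost * sum(row)
        for j, points in enumerate(row):
            out[j] -= impact * points
    return out

# Three cooperators and one free-rider; the free-rider (player 3) punishes a
# cooperator (player 0) -- the "anti-social punishment" pattern.
payoffs = public_goods_round([20, 20, 20, 0])
punishment = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [3, 0, 0, 0]]
print(apply_punishment(payoffs, punishment))  # the punished cooperator ends up worst off
```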

Wednesday, January 13, 2016

Purity, Disgust and Donald Trump

Following the thread of the past few posts (on the partisan divide and empathy), I want to pass on this must-read article by Thomas Edsall, who summarizes the work of academic researchers probing the roles of purity/disgust, order/chaos, anger, and fear in the electorate. Here is a great graphic from the article:


Tuesday, January 12, 2016

Our strongest prejudice - partisan hostility

Jonathan Haidt points to the fascinating paper by political scientists Shanto Iyengar and Sean Westwood, “Fear and Loathing Across Party Lines: New Evidence on Group Polarization.” In four studies designed to reveal prejudice based on race, gender, religion, or political party or ideology, they found that cross-partisan prejudice was the largest. For white participants who identified with a party, the cross-partisan effect was about 50 percent larger than the cross-race effect. Haidt points out that this is extremely bad news for America because:
...rising cross-partisan hostility means that Americans increasingly see the other side not just as wrong but as evil, as a threat to the very existence of the nation, according to Pew Research. Americans can expect rising polarization, nastiness, paralysis, and governmental dysfunction for a long time to come...This is extremely bad news for science and universities because universities are usually associated with the left...we can expect increasing hostility from Republican legislators toward universities and the things they desire, including research funding and freedom from federal and state control...This is a warning for the rest of the world because some of the trends that have driven America to this point are occurring in many other countries, including: rising education and individualism (which make people more ideological), rising immigration and ethnic diversity (which reduces social capital and trust), and stagnant economic growth (which puts people into a zero-sum mindset).
The situation is made worse by the "motive attribution asymmetry" that I have referenced in a previous post. Both sides of a political divide attribute their own aggressive behavior to love, but the opposite side's to hatred. Millions of Americans believe that their side is basically benevolent while the other side is evil and out to get them.

Monday, January 11, 2016

How learning shapes the empathic brain.

From Hein et al.:

Abstract
Deficits in empathy enhance conflicts and human suffering. Thus, it is crucial to understand how empathy can be learned and how learning experiences shape empathy-related processes in the human brain. As a model of empathy deficits, we used the well-established suppression of empathy-related brain responses for the suffering of out-groups and tested whether and how out-group empathy is boosted by a learning intervention. During this intervention, participants received costly help equally often from an out-group member (experimental group) or an in-group member (control group). We show that receiving help from an out-group member elicits a classical learning signal (prediction error) in the anterior insular cortex. This signal in turn predicts a subsequent increase of empathy for a different out-group member (generalization). The enhancement of empathy-related insula responses by the neural prediction error signal was mediated by an establishment of positive emotions toward the out-group member. Finally, we show that surprisingly few positive learning experiences are sufficient to increase empathy. Our results specify the neural and psychological mechanisms through which learning interacts with empathy, and thus provide a neurobiological account for the plasticity of empathic reactions.
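The "classical learning signal (prediction error)" in the abstract is the quantity formalized in Rescorla-Wagner and temporal-difference models: the gap between what happened and what was expected, which then updates the expectation. A toy sketch with arbitrary numbers of my own choosing, not the authors' model:

```python
# Prediction-error learning in miniature: the expected value of out-group help
# is updated whenever the received outcome exceeds the prediction.

def update_expectation(expected, outcome, learning_rate=0.2):
    prediction_error = outcome - expected       # the "surprise" signal
    return expected + learning_rate * prediction_error, prediction_error

expected = 0.2                                   # low initial expectation of out-group help
for trial in range(1, 6):
    expected, pe = update_expectation(expected, outcome=1.0)  # 1.0 = costly help received
    print(f"trial {trial}: prediction error = {pe:.2f}, new expectation = {expected:.2f}")
```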

Friday, January 08, 2016

Virtual reality going mainstream will enhance our understanding of consciousness.

These clips from Thomas Metzinger are fascinating:
2016 will be the year in which VR finally breaks through at the mass consumer level. What is more, users will soon be enabled to toggle between virtual, augmented, and substitutional reality, experiencing virtual elements intermixed with their “actual” physical environment or an omnidirectional video feed giving them the illusion of being in a different location in space and/or time, while insight may not always be preserved. Oculus Rift, Zeiss VR One, Sony PlayStation VR, HTC Vive, Samsung’s Galaxy Gear VR or Microsoft’s HoloLens are just the very beginning...
The real news, however, may be that the general public will gradually acquire a new and intuitive understanding of what their very own conscious experience really is and what it always has been. VR is the representation of possible worlds and possible selves, with the aim of making them appear as real as possible—ideally, by creating a subjective sense of “presence” in the user. Interestingly, some of our best theories of the human mind and conscious experience describe it in a very similar way: Leading theoretical neurobiologists like Karl Friston and eminent philosophers like Jakob Hohwy and Andy Clark describe it as the constant creation of internal models of the world, virtual neural representations of reality which express probability density functions and work by continuously generating hypotheses about the hidden causes of sensory input, minimizing their prediction error. In 1995, Finnish philosopher Antti Revonsuo already pointed out how conscious experience exactly is a virtual model of the world, a dynamic internal simulation, which in standard situations cannot be experienced as a virtual model because it is phenomenally transparent—we “look through it” as if we were in direct and immediate contact with reality. What is historically new, and what creates not only novel psychological risks but also entirely new ethical and legal dimensions, is that one virtual reality gets ever more deeply embedded into another virtual reality: The conscious mind of human beings, which has evolved under very specific conditions and over millions of years, now gets causally coupled and informationally woven into technical systems for representing possible realities. Increasingly, consciousness is not only culturally and socially embedded, but also shaped by a specific technological niche that, over time, quickly acquires rapid, autonomous dynamics and ever new properties. This creates a complex convolution, a nested form of information flow in which the biological mind and its technological niche influence each other in ways we are just beginning to understand.
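The predictive-processing picture Metzinger invokes (Friston, Hohwy, Clark) can be caricatured in a few lines: an internal estimate of a hidden cause is repeatedly nudged by the sensory prediction error. This is purely illustrative, with made-up numbers; real predictive-coding models are hierarchical and precision-weighted.

```python
# Toy "internal model": estimate a hidden cause from noisy sensations by
# repeatedly reducing the prediction error.
import random

hidden_cause = 3.0            # the actual state of the world (unknown to the model)
estimate = 0.0                # the model's current hypothesis
step = 0.1                    # update rate

for _ in range(100):
    sensation = hidden_cause + random.gauss(0, 0.3)   # noisy sensory input
    prediction_error = sensation - estimate
    estimate += step * prediction_error               # minimize the error

print(f"estimate of the hidden cause after 100 samples: {estimate:.2f}")
```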

Thursday, January 07, 2016

How our brains change during the day.

I am reminded what a rigid daily schedule my body keeps and expects every time I vary my routine slightly (changing a meal time, exercise, happy hour, bedtime)...my body doesn't like it, feels off kilter. Travel of the sort I've been doing over the past month is a huge disrupter. I'm also clear that any demanding and analytical thinking I might want to do should happen before noontime, because by 4 p.m. (a low blood sugar time for the body), my mind has become very lazy.

McClung and collaborators have done the fascinating experiment of looking at gene expression over the course of the day in brain areas important in learning, memory, and emotion, in 146 young and old brains, finding differences with aging. The brains were from subjects who had died suddenly, as in a car accident. They built on the work of Akil and collaborators, who earlier had shown that the activity of ~1000 genes varies in a daily pattern that allows the time of death to be predicted to within an hour. That pattern was disrupted in people with major depressive disorder. From McClung's group:

Significance
Circadian rhythms are important in nearly all processes in the brain. Changes in rhythms that come with aging are associated with sleep problems, problems with cognition, and nighttime agitation in elderly people. In this manuscript, we identified transcripts genome-wide that have a circadian rhythm in expression in human prefrontal cortex. Moreover, we describe how these rhythms are changed during normal human aging. Interestingly, we also identified a set of previously unidentified transcripts that become rhythmic only in older individuals. This may represent a compensatory clock that becomes active with the loss of canonical clock function. These studies can help us to develop therapies in the future for older people who suffer from cognitive problems associated with a loss of normal rhythmicity.
Abstract
With aging, significant changes in circadian rhythms occur, including a shift in phase toward a “morning” chronotype and a loss of rhythmicity in circulating hormones. However, the effects of aging on molecular rhythms in the human brain have remained elusive. Here, we used a previously described time-of-death analysis to identify transcripts throughout the genome that have a significant circadian rhythm in expression in the human prefrontal cortex [Brodmann’s area 11 (BA11) and BA47]. Expression levels were determined by microarray analysis in 146 individuals. Rhythmicity in expression was found in ∼10% of detected transcripts (P < 0.05). Using a metaanalysis across the two brain areas, we identified a core set of 235 genes (q < 0.05) with significant circadian rhythms of expression. These 235 genes showed 92% concordance in the phase of expression between the two areas. In addition to the canonical core circadian genes, a number of other genes were found to exhibit rhythmic expression in the brain. Notably, we identified more than 1,000 genes (1,186 in BA11; 1,591 in BA47) that exhibited age-dependent rhythmicity or alterations in rhythmicity patterns with aging. Interestingly, a set of transcripts gained rhythmicity in older individuals, which may represent a compensatory mechanism due to a loss of canonical clock function. Thus, we confirm that rhythmic gene expression can be reliably measured in human brain and identified for the first time (to our knowledge) significant changes in molecular rhythms with aging that may contribute to altered cognition, sleep, and mood in later life.
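For readers wondering how a rhythm can be detected when each brain is sampled only once: expression of each transcript is regressed on sine and cosine terms of the donor's time of death with a 24-hour period (a cosinor fit), and the fitted amplitude is tested against chance. A minimal sketch with a hypothetical data layout and a few canonical clock genes as examples; the published pipeline is considerably more involved.

```python
# Cosinor-style test for circadian rhythmicity in expression vs. time of death.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("expression_by_time_of_death.csv")   # hypothetical file
t = df["time_of_death_hours"]                          # 0-24 h, local clock time
X = sm.add_constant(np.column_stack([np.sin(2 * np.pi * t / 24),
                                     np.cos(2 * np.pi * t / 24)]))

for gene in ["ARNTL", "PER1", "PER2"]:                 # canonical clock genes
    fit = sm.OLS(df[gene], X).fit()
    beta = np.asarray(fit.params)                      # [intercept, sin coef, cos coef]
    amplitude = np.hypot(beta[1], beta[2])
    print(f"{gene}: amplitude = {amplitude:.2f}, overall p = {fit.f_pvalue:.3g}")
```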

Wednesday, January 06, 2016

Nature's warning system.

A colleague in my Chaos and Complexity discussion group at the Univ. of Wisconsin passed on this Atlantic article, well worth reading, in which my Zoology colleague Steve Carpenter is extensively quoted. Complex systems, like ecological food webs, the brain, and the climate, give off characteristic signals when a disastrous transformation is around the corner.
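The "characteristic signals" are the statistical early-warning indicators Carpenter and colleagues have helped develop: as a system approaches a tipping point it recovers from perturbations more slowly, which shows up as rising variance and rising lag-1 autocorrelation in a sliding window. A minimal sketch on a placeholder time series (real applications use detrended field or model data):

```python
# Rolling early-warning indicators: variance and lag-1 autocorrelation.
import numpy as np
import pandas as pd

series = pd.Series(np.random.randn(500).cumsum())   # placeholder time series
window = 50

rolling_var = series.rolling(window).var()
rolling_ac1 = series.rolling(window).apply(lambda w: pd.Series(w).autocorr(lag=1))

print(rolling_var.tail())   # a sustained rise in either statistic is the warning sign
print(rolling_ac1.tail())
```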

Tuesday, January 05, 2016

Higher inequality correlates with less generous rich people.

From Côté et al.:
Research on social class and generosity suggests that higher-income individuals are less generous than poorer individuals. We propose that this pattern emerges only under conditions of high economic inequality, contexts that can foster a sense of entitlement among higher-income individuals that, in turn, reduces their generosity. Analyzing results of a unique nationally representative survey that included a real-stakes giving opportunity (n = 1,498), we found that in the most unequal US states, higher-income respondents were less generous than lower-income respondents. In the least unequal states, however, higher-income individuals were more generous. To better establish causality, we next conducted an experiment (n = 704) in which apparent levels of economic inequality in participants’ home states were portrayed as either relatively high or low. Participants were then presented with a giving opportunity. Higher-income participants were less generous than lower-income participants when inequality was portrayed as relatively high, but there was no association between income and generosity when inequality was portrayed as relatively low. This research finds that the tendency for higher-income individuals to be less generous pertains only when inequality is high, challenging the view that higher-income individuals are necessarily more selfish, and suggesting a previously undocumented way in which inequitable resource distributions undermine collective welfare... Our findings offer a more complete understanding of the association between income and generosity and have implications for contemporary debates about the social impact of unequal resource distributions.
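In statistical terms the key claim is a moderation effect: the slope relating income to giving depends on state-level inequality. A minimal sketch of that interaction test, with hypothetical file and column names:

```python
# Income x inequality interaction predicting generosity (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("giving_survey.csv")   # hypothetical file
model = smf.ols("amount_given ~ income * state_gini", data=df).fit()
print(model.summary().tables[1])        # inspect the income:state_gini interaction term
```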

Monday, January 04, 2016

A positive tonic: Human Progress Quantified

Edge.org has just posted responses to its 2016 question of the year, "What do you consider the most interesting recent [scientific] news? What makes it important?" In coming posts I'm going to pass on edited clips from some of these responses, starting today with Steven Pinker's contribution on human progress:
Human intuition is a notoriously poor guide to reality...Fortunately, as the bugs in human cognition have become common knowledge, the workaround—objective data—has become more prevalent...Sports have been revolutionized by Moneyball, policy by Nudge, punditry by 538.com, forecasting by tournaments and prediction markets, philanthropy by effective altruism, the healing arts by evidence-based medicine.
The most interesting news is that the quantification of life has been extended to the biggest question of all: Have we made progress... in improving the human condition?...Most people agree that life is better than death, health better than disease, prosperity better than poverty, knowledge better than ignorance, peace better than war, safety better than violence, freedom better than coercion. That gives us a set of yardsticks by which we can measure whether progress has actually occurred. The interesting news is that the answer is mostly "yes."... the rate of homicides and war deaths had plummeted over time...People are living longer and healthier lives, not just in the developed world but globally. A dozen infectious and parasitic diseases are extinct or moribund. Vastly more children are going to school and learning to read. Extreme poverty has fallen worldwide from 85 to 10 percent. Despite local setbacks, the world is more democratic than ever. Women are better educated, marrying later, earning more, and in more positions of power and influence. Racial prejudice and hate crimes have decreased since data were first recorded. The world is even getting smarter: In every country, IQ has been increasing by three points a decade.
"Ecomodernists" such as Stewart Brand, Jesse Ausubel, and Ruth DeFries have shown that many indicators of environmental health have improved over the last half-century, and that there are long-term historical processes, such as the decarbonization of energy, the dematerialization of consumption, and the minimization of farmland that can be further encouraged...for all the ways in which the world today falls short of utopia, the norms and institutions of modernity have put us on a good track. We should work on improving them further, rather than burning them down in the conviction that nothing could be worse than our current decadence and in the vague hope that something better might rise from their ashes...quantified human progress emboldens us to seek more of it...The empowering feature of a graph is that it invites one to identify the forces that are pushing a curve up or down, and then to apply them to push it further in the same direction.

Friday, January 01, 2016

Boosting our brain plasticity.

Wow, this study by Forsyth et al. makes me want to run out and buy a bottle of D-cycloserine, which they show enhances experience-dependent learning, i.e. brain plasticity, in healthy adult humans. (Actually, it's expensive, and experimenting with it by yourself is not a good idea.)

Significance
Experience-dependent plasticity is the capacity of the brain to undergo changes following environmental input and use, and is a primary means through which the adult brain enables new behavior. In the current study, we provide evidence that enhancing signaling at the glutamate N-methyl-D-aspartate receptor (NMDAR) can enhance the mechanism underlying many forms of experience-dependent plasticity (i.e., long-term potentiation of synaptic currents) and also enhance experience-dependent learning in healthy adult humans. This suggests exciting possibilities for manipulating plasticity in adults and has implications for treating neurological and neuropsychiatric disorders in which experience-dependent plasticity is impaired.
Abstract
Experience-dependent plasticity is a fundamental property of the brain. It is critical for everyday function, is impaired in a range of neurological and psychiatric disorders, and frequently depends on long-term potentiation (LTP). Preclinical studies suggest that augmenting N-methyl-D-aspartate receptor (NMDAR) signaling may promote experience-dependent plasticity; however, a lack of noninvasive methods has limited our ability to test this idea in humans until recently. We examined the effects of enhancing NMDAR signaling using D-cycloserine (DCS) on a recently developed LTP EEG paradigm that uses high-frequency visual stimulation (HFvS) to induce neural potentiation in visual cortex neurons, as well as on three cognitive tasks: a weather prediction task (WPT), an information integration task (IIT), and an n-back task. The WPT and IIT are learning tasks that require practice with feedback to reach optimal performance. The n-back assesses working memory. Healthy adults were randomized to receive DCS (100 mg; n = 32) or placebo (n = 33); groups were similar in IQ and demographic characteristics. Participants who received DCS showed enhanced potentiation of neural responses following repetitive HFvS, as well as enhanced performance on the WPT and IIT. Groups did not differ on the n-back. Augmenting NMDAR signaling using DCS therefore enhanced activity-dependent plasticity in human adults, as demonstrated by lasting enhancement of neural potentiation following repetitive HFvS and accelerated acquisition of two learning tasks. Results highlight the utility of considering cellular mechanisms underlying distinct cognitive functions when investigating potential cognitive enhancers.