Friday, July 19, 2019

It’s never simple...The tidy textbook story about the primary visual cortex is wrong.

When I was a postdoc in the Harvard Neurobiology department in the mid-1960s I used to have afternoon tea with the Hubel and Wiesel group. These are the guys who got a Nobel Prize for, among other things, finding that the primary visual cortex is organized into cortical columns of cells that respond selectively to lines of different preferred orientations. Another grouping of columns, called ‘blobs,’ responded selectively to color and brightness but not orientation. These two different kinds of groups sent their outputs to higher visual areas that were supposed to integrate the information. My neurobiology course lectures and my Biology of Mind book showed drawings illustrating these tidy distinctions.

Sigh… now Garg et al. come along with two-photon calcium imaging to probe a very large spatial and chromatic visual stimulus space and map functional microarchitecture of thousands of neurons with single-cell resolution. They show that processing of orientation and color is combined at the earliest stages of visual processing, totally challenging the existing model. Their abstract:
Previous studies support the textbook model that shape and color are extracted by distinct neurons in primate primary visual cortex (V1). However, rigorous testing of this model requires sampling a larger stimulus space than previously possible. We used stable GCaMP6f expression and two-photon calcium imaging to probe a very large spatial and chromatic visual stimulus space and map functional microarchitecture of thousands of neurons with single-cell resolution. Notable proportions of V1 neurons strongly preferred equiluminant color over achromatic stimuli and were also orientation selective, indicating that orientation and color in V1 are mutually processed by overlapping circuits. Single neurons could precisely and unambiguously code for both color and orientation. Further analyses revealed systematic spatial relationships between color tuning, orientation selectivity, and cytochrome oxidase histology.

Wednesday, July 17, 2019

An anti-aging pill with some credibility....

An AARP article points to studies on the new drug RTB101, which boosts immune function in older people and lowers risk for respiratory diseases. By virtue of its inhibition of the multiprotein complex TORC1, which mediates temporal control of cell growth, it is also a potential anti-aging drug. Below is text from the anti-aging website:
resTORbio, Inc. is developing RTB101, an oral medication that inhibits target of rapamycin complex 1 (TORC1). Using a combination of RTB101 and everolimus, another TORC1 inhibitor, the company is testing a comprehensive immunotherapy program to fight respiratory tract infections in the elderly, improving the immune system rather than attempting to target individual infectious agents.
The mammalian target of rapamycin (mTOR) pathway uses the multiprotein complexes TORC1 and TORC2 to conduct signaling. While inhibiting TORC2 has been observed to decrease lifespan, TORC1 inhibition has been shown to have multiple positive effects, including enhanced brain functions, reduction of adipose tissue, and delaying the onset of age-related pathology. Inhibiting TORC1 with both RTB101 and everolimus decreases S6K while increasing 4EBP1 and Atg; affecting each of these downstream products in this way is reported to enhance lifespan.
In 2018, the company reported that a phase 2b trial of RTB101 returned positive results for boosting the immune system to respond better to respiratory infections. The trial enrolled 652 older people at increased risk of RTIs; compared with the control group, significantly fewer patients treated with RTB101 suffered from one or more RTIs during the 16-week trial period. The next step for resTORbio is to conduct a study of its effects on other infections, heart failure, or autophagy-related diseases later in 2018.
Following the successful Phase 2 and 2b studies, the company has agreed with the FDA to proceed to a large-scale Phase 3 clinical trial scheduled to begin later in 2019...resTORbio has licensed the worldwide rights for its product from Novartis, a major pharmaceutical company...Website: resTORbio

Monday, July 15, 2019

Coherence between our physical and mental experiences of stress is linked to psychological and physical well-being

Davidson and colleagues at the Univ. of Wisconsin show that individual differences in the ability to associate subjective stress and heart rate are related to psychological and physical well-being. Here is a description of the work from a newsletter, followed by the abstract from Psychological Science. They:
...analyzed data from 1,065 participants in the Midlife in the United States (MIDUS) study, a longitudinal effort looking at well-being as adults age. Participants completed a series of stressful computer tasks, including a mental math task and a color identification task.
Before, during and after the tasks, researchers measured participants’ heart rate and asked them to rate their stress on a scale of one to 10 throughout the study.
After the participants completed the stress tests, researchers compared each person’s heart rate to the stress levels they reported — a measure called “stress-heart rate coherence”— and found that some people’s stress levels aligned with their heart rate better than others.
To examine the link between stress-heart rate coherence and people’s emotional well-being, researchers used psychological questionnaires focused on well-being, depression, anxiety and coping as well as blood samples measuring inflammation markers. Researchers found that people with greater stress-heart rate coherence had fewer symptoms of anxiety and depression, greater overall psychological well-being and lower levels of inflammation.
Here is the journal article's abstract:
The physiological response to stress is intertwined with, but distinct from, the subjective feeling of stress, although both systems must work in concert to enable adaptive responses. We investigated 1,065 participants from the Midlife in the United States 2 study who completed a self-report battery and a stress-induction procedure while physiological and self-report measures of stress were recorded. Individual differences in the association between heart rate and self-reported stress were analyzed in relation to measures that reflect psychological well-being (self-report measures of well-being, anxiety, depression), denial coping, and physical well-being (proinflammatory biomarkers interleukin-6 and C-reactive protein). The within-participants association between heart rate and self-reported stress was significantly related to higher psychological well-being, fewer depressive symptoms, lower trait anxiety, less use of denial coping, and lower levels of proinflammatory biomarkers. Our results highlight the importance of studying individual differences in coherence between physiological measures and subjective mental states in relation to well-being.
And here is a PDF of the article.
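The core of the "stress–heart rate coherence" measure described above is a within-person association between repeated heart-rate readings and simultaneous self-reported stress ratings. Here is a minimal sketch in Python, using invented numbers for a single hypothetical participant; the paper's actual analysis models this association across all 1,065 people:

```python
import numpy as np

def stress_hr_coherence(heart_rate, reported_stress):
    """Within-person Pearson correlation between a participant's heart rate
    and their self-reported stress across repeated measurements."""
    hr = np.asarray(heart_rate, dtype=float)
    st = np.asarray(reported_stress, dtype=float)
    return float(np.corrcoef(hr, st)[0, 1])

# Hypothetical participant: stress ratings (1-10) and heart rate (bpm)
# at baseline, during two stressor tasks, and at recovery.
hr = [68, 92, 88, 72]
stress = [2, 8, 7, 3]
print(round(stress_hr_coherence(hr, stress), 3))  # prints 1.0 for these
# perfectly aligned toy numbers; real coherence values vary across people
```

People whose subjective ratings track their physiology closely get coherence values near 1; the study's finding is that such people tend to report better well-being.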

Friday, July 12, 2019

The Problem With ‘Sharenting’

An article in the NYTimes 'Privacy Project' by Kamenetz resonates with my own experience of very mixed reactions to viewing some of what I consider the most private and intimate details of the lives of my 5- and 7-year-old grandsons, occasionally revealed in some of the Facebook posts by their parents. I feel embarrassed for the boys, and sometimes wonder how they would feel in their 20s and 30s if they were to look back at these posts on their childhood. (A pre-Facebook version of the situation from my childhood is a photograph of two legs - five-year-old Deric - protruding from under a newspaper. Cute!) Some clips from the Kamenetz piece:
Today, many children’s social media presence starts with a sonogram, posted, obviously, without consent. One study from Britain found that nearly 1,500 images of the average child had been placed online by their fifth birthday. Parents get a lot of gratification from telling kids’ stories online...It’s less clear what our children have to gain from their lives being broadcast in this way...parents’ rights to free speech and self-expression are at odds with children’s rights to privacy when they are young and vulnerable...This is especially true when the information is potentially damaging. Imagine a child who has behavior problems, learning disabilities or chronic illness. Mom or Dad understandably want to discuss these struggles and reach out for support. But those posts live on the internet, with potential to be discovered by college admissions officers and future employers, friends and romantic prospects. A child’s life story is written for him before he has a chance to tell it himself.
Even if you confine your posts about your children to sunny days and birthday parties, any information you provide about them — names, dates of birth, geographic location — could be acquired by data brokers, companies that collect personal information and sell it to advertisers.
Finally, there’s display and commodification. In 2018, the top earner on YouTube, according to Forbes, was a 7-year-old boy who brought in $22 million by playing with toys. It’s never seemed more accessible to become famous at a wee age, and the type of children who used to sing into a hairbrush in the mirror are often clamoring to start their own channels today.
The most egregious abuses are just the tip of the iceberg, though. For every moneymaking influencer, there are millions of less-successful stage parents and wannabes scratching for followers on YouTube and Instagram. They’re out there shoving cameras in children’s faces, using up their free time, killing spontaneity, warping the everyday rituals of childhood into long working shoots.
When it comes to childhood and technology, we adults are the horror show.

Wednesday, July 10, 2019

Looking for a dose of optimism?

I want to point to a recent NYTimes Op-Ed piece by David Brooks that is a bit more upbeat than his frequent hand wringing over the dissolution of old verities. A few clips from its later paragraphs:
The reality and challenge is that America has become radically pluralistic. We used to be unipolar — one dominant majority culture and a lot of minority groups that defined themselves against it. Now we’re multipolar. We’re all minorities now.
That could blow us to smithereens. But who knows? We could learn to be minorities together, to be what Rabbi Jonathan Sacks calls creative minorities. In a brilliant 2013 lecture, Sacks noted that when Solomon’s temple was destroyed and the Jews were cast into exile, the prophet Jeremiah had a surprising message: Go to new lands. Build houses. Plant gardens. Seek the peace and prosperity of the cities in which you settle.
In a world of radical pluralism, we are all Jews. We have no choice but to build a mass multicultural democracy, a society that has no dominant center but is a collection of creative minorities...Nearly 200 years ago, Tocqueville wrote that democracy was creating a new sort of man. Pluralism today is creating a new sort of person, especially among the young. They don’t just relish diversity; they embody it. Many have mixed roots — say, half-French/half-Dominican. Many are border stalkers; they live between cultures, switch back and forth, and work hard to build a multiplicity of influences into a single coherent life. They’re Whitmanesque, containing multitudes, holding opposite ideas in their minds at the same time.
Radical pluralism also necessitates retelling the nation’s history. We’ve always been a universal nation, a crossroads nation, a nation whose very identity is defined by the fact that it is a hub for a dense network of minorities and subgroups, and the distinct way of life they fashion to interact and flourish together.
I used to think that America had to find a new unifying national narrative. Now I wonder if not having a single national narrative will become our national narrative.

Monday, July 08, 2019

Subgroups of gay men correspond to different biodevelopmental pathways

Swift-Gallant et al. consider three established biomarkers of sexual orientation and suggest they reflect distinct biodevelopmental pathways influencing same-sex sexual orientation in men. They describe these biomarkers in their introduction:
A well-established biomarker of sexual orientation is familiality of male same-sex sexual orientation. Same-sex sexual orientation clusters in families, twin studies show greater sexual orientation concordance among monozygotic than dizygotic twins, and molecular genetic studies have identified candidate genes associated with sexual orientation.
A second well-studied biomarker of sexual orientation is handedness. Although the biological underpinnings of handedness are not yet clear, increasing evidence suggests that handedness is a marker of cerebral lateralization determined prenatally by genetic, immunological, and endocrine mechanisms and/or by developmental instability... it is estimated that men have 20% greater odds of non−right-handedness than women, and gay men have 34% greater odds of being non−right-handed than heterosexual men.
A third well-established biomarker of sexual orientation is the fraternal birth order effect. Across a diverse range of cultures and sample types, studies have shown that older brothers increase the odds of androphilia in later-born males. The maternal immune hypothesis is the best-developed explanation of the fraternal birth order effect. It argues that male antigens enter maternal circulation during the gestation and birthing of male offspring, promoting an immune response to these male-specific antigens that increases with each successive male fetus gestated; thus, with each successive pregnancy with a male fetus, the odds increase that these maternal antibodies will affect sexual differentiation of the brain and behavior, including sexual preferences.
Their Significance and Abstract statements:  

Studying individual differences in gender and sexual orientation provides insight into how early-life biology shapes brain and behavior. The literature identifies multiple biodevelopmental influences on male sexual orientation, but these influences are generally studied individually, and the potential for association or interaction between them remains largely unexplored. We hypothesized that distinct biodevelopmental pathways correspond to specific subgroups of nonheterosexual men. We present evidence that nonheterosexual men can be categorized into at least four subgroups based on established biomarkers, and these biodevelopmental pathways differentially relate to gender expression and personality traits. These findings indicate individual differences in biodevelopmental pathways of male sexual orientation. They also illustrate the value of latent profile analyses for studying individual differences.
Several biological mechanisms have been proposed to influence male sexual orientation, but the extent to which these mechanisms co-occur is unclear. Putative markers of biological processes are often used to evaluate the biological basis of male sexual orientation, including fraternal birth order, handedness, and familiality of same-sex sexual orientation; these biomarkers are proxies for immunological, endocrine, and genetic mechanisms. Here, we used latent profile analysis (LPA) to assess whether these biomarkers cluster within the same individuals or are present in different subgroups of nonheterosexual men. LPA defined four profiles of men based on these biomarkers: 1) A subgroup who did not have these biomarkers, 2) fraternal birth order, 3) handedness, and 4) familiality. While the majority of both heterosexual and nonheterosexual men were grouped in the profile that did not have any biomarker, the three profiles associated with a biomarker were composed primarily of nonheterosexual men. We then evaluated whether these subgroups differed on measures of gender nonconformity and personality that reliably show male sexual orientation differences. The subgroup without biomarkers was the most gender-conforming whereas the fraternal birth order subgroup was the most female-typical and agreeable, compared with the other profiles. Together, these findings suggest there are multiple distinct biodevelopmental pathways influencing same-sex sexual orientation in men.
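The latent profile analysis used in the paper groups individuals by their pattern across several biomarkers. As a rough illustration of the idea only (not the authors' actual model or data), here is a bare-bones EM algorithm for a latent class model over binary biomarker indicators; the biomarker columns and sample sizes below are entirely made up:

```python
import numpy as np

def latent_class_em(X, n_classes, n_iter=200, seed=0):
    """Basic EM for a latent class model on binary indicators, a simple
    stand-in for latent profile analysis. X: (n_subjects, n_biomarkers)
    array of 0/1 indicators. Returns class weights, per-class biomarker
    probabilities, and each subject's most likely class."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class weights
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # P(biomarker | class)
    for _ in range(n_iter):
        # E-step: responsibility of each class for each subject
        log_lik = (X[:, None, :] * np.log(theta) +
                   (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class weights (tiny floor avoids log(0))
        pi = resp.mean(axis=0) + 1e-12
        # ...and smoothed per-class biomarker probabilities
        theta = (resp.T @ X + 1.0) / (resp.sum(axis=0)[:, None] + 2.0)
    return pi, theta, resp.argmax(axis=1)

# Hypothetical sample: columns = (older brothers, non-right-handed,
# same-sex oriented relatives); rows = subjects with one or no marker.
X = np.array([[0, 0, 0]] * 20 + [[1, 0, 0]] * 8 +
             [[0, 1, 0]] * 8 + [[0, 0, 1]] * 8)
pi, theta, labels = latent_class_em(X, n_classes=4)
print(np.unique(labels).size)  # number of distinct profiles found
```

Subjects sharing the same biomarker pattern land in the same profile, which is the qualitative structure the paper reports (a no-biomarker majority plus biomarker-specific subgroups).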

Friday, July 05, 2019

Social Media - no effect on adolescent life satisfaction?

Orben et al. (open access article) provide a study whose results contest a common opinion, reinforced by several previous studies, that adolescents who use social media extensively are more likely to be depressed and have low self-esteem. They used...
...large-scale representative panel data to disentangle the between-person and within-person relations linking adolescent social media use and well-being. We found that social media use is not, in and of itself, a strong predictor of life satisfaction across the adolescent population. Instead, social media effects are nuanced, small at best, reciprocal over time, gender specific, and contingent on analytic methods.
They note limitations of current published research:
Focused on cross-sectional relations, scientists have few means of parsing longitudinal effects from artifacts introduced by common statistical modeling methodologies. Furthermore, the volume of data under analysis, paired with unchecked analytical flexibility, enables selective research reporting, biasing the literature toward statistically significant effects. Nevertheless, trivial trends are routinely overinterpreted by those under increasing pressure to rapidly craft evidence-based policies.
The UK study examined data on 12,672 10- to 15-year-olds. Two summary graphics are provided. One clip:
...the importance of gender was apparent: Only 16% of significant models arose from male data.
The last two paragraphs:
The relations linking social media use and life satisfaction are, therefore, more nuanced than previously assumed: They are inconsistent, possibly contingent on gender, and vary substantively depending on how the data are analyzed. Most effects are tiny—arguably trivial; where best statistical practices are followed, they are not statistically significant in more than half of models. That understood, some effects are worthy of further exploration and replication: There might be small reciprocal within-person effects in females, with increases in life satisfaction predicting slightly lower social media use, and increases in social media use predicting tenuous decreases in life satisfaction.
With the unknowns of social media effects still substantially outnumbering the knowns, it is critical that independent scientists, policymakers, and industry researchers cooperate more closely. Scientists must embrace circumspection, transparency, and robust ways of working that safeguard against bias and analytical flexibility. Doing so will provide parents and policymakers with the reliable insights they need on a topic most often characterized by unfounded media hype. Finally, and most importantly, social media companies must support independent research by sharing granular user engagement data and participating in large-scale team-based open science. Only then will we truly unravel the complex constellations of effects shaping young people in the digital age.
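The between-person versus within-person distinction the authors emphasize can be illustrated with simple person-mean centering: each observation is split into a stable person-level mean (between-person part) and a wave-to-wave deviation from it (within-person part), and the two parts can show different associations. A toy sketch with invented panel data (the paper's actual models are considerably more sophisticated):

```python
import numpy as np

# Hypothetical panel: rows = (person_id, wave, social_media_hours, life_satisfaction)
data = np.array([
    [0, 1, 1.0, 7.0], [0, 2, 2.0, 6.5], [0, 3, 1.5, 7.2],
    [1, 1, 4.0, 5.0], [1, 2, 5.0, 4.6], [1, 3, 4.5, 5.1],
    [2, 1, 2.5, 6.0], [2, 2, 3.0, 5.8], [2, 3, 2.0, 6.3],
])

def within_between_split(values, person_ids):
    """Person-mean centering: each observation = person mean (between-person
    part) + deviation from that mean (within-person part)."""
    between = np.empty_like(values)
    for pid in np.unique(person_ids):
        mask = person_ids == pid
        between[mask] = values[mask].mean()
    within = values - between
    return between, within

use_b, use_w = within_between_split(data[:, 2], data[:, 0])
sat_b, sat_w = within_between_split(data[:, 3], data[:, 0])
# Between-person association: do heavier users report lower satisfaction?
r_between = np.corrcoef(use_b, sat_b)[0, 1]
# Within-person association: do a person's heavier-use waves coincide with
# lower satisfaction for that same person?
r_within = np.corrcoef(use_w, sat_w)[0, 1]
```

Conflating these two levels is one of the methodological problems the authors criticize: a between-person correlation says nothing about whether changing an individual adolescent's social media use would change their life satisfaction.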

Wednesday, July 03, 2019

Think twice about metformin as an anti-aging drug.

Gretchen Reynolds points to work suggesting that use of metformin (the most commonly used Type 2 diabetes drug) as an anti-aging agent by healthy active people (because it reduces inflammation and causes other cellular effects that alter aging) may have a downside. Konopka et al. report that it suppresses the anti-aging effects of exercise, notably exercise-related gains in muscle-cell mitochondrial respiration. There is the usual caveat that this is a single study with a relatively small number of subjects (53). Here is their technical abstract:
Metformin and exercise independently improve insulin sensitivity and decrease the risk of diabetes. Metformin was also recently proposed as a potential therapy to slow aging. However, recent evidence indicates that adding metformin to exercise antagonizes the exercise‐induced improvement in insulin sensitivity and cardiorespiratory fitness. The purpose of this study was to test the hypothesis that metformin diminishes the improvement in insulin sensitivity and cardiorespiratory fitness after aerobic exercise training (AET) by inhibiting skeletal muscle mitochondrial respiration and protein synthesis in older adults (62 ± 1 years). In a double‐blinded fashion, participants were randomized to placebo (n = 26) or metformin (n = 27) treatment during 12 weeks of AET. Independent of treatment, AET decreased fat mass, HbA1c, fasting plasma insulin, 24‐hr ambulant mean glucose, and glycemic variability. However, metformin attenuated the increase in whole‐body insulin sensitivity and VO2max after AET. In the metformin group, there was no overall change in whole‐body insulin sensitivity after AET due to positive and negative responders. Metformin also abrogated the exercise‐mediated increase in skeletal muscle mitochondrial respiration. The change in whole‐body insulin sensitivity was correlated to the change in mitochondrial respiration. Mitochondrial protein synthesis rates assessed during AET were not different between treatments. The influence of metformin on AET‐induced improvements in physiological function was highly variable and associated with the effect of metformin on the mitochondria. These data suggest that prior to prescribing metformin to slow aging, additional studies are needed to understand the mechanisms that elicit positive and negative responses to metformin with and without exercise.

Monday, July 01, 2019

Your professional decline.

Arthur Brooks, former president of the conservative American Enterprise Institute think tank and a New York Times Op-Ed writer, has an essay in The Atlantic in which he contemplates his professional decline, making points that are universally relevant. Some clips:
...happiness of most adults declines through their 30s and 40s, then bottoms out in their early 50s...Almost all studies of happiness over the life span show that, in wealthier countries, most people’s contentment starts to increase again in their 50s, until age 70 or so. That is where things get less predictable, however. After 70, some people stay steady in happiness; others get happier until death. Others—men in particular—see their happiness plummet. Indeed, depression and suicide rates for men increase after age 75...A few researchers have looked at this cohort to understand what drives their unhappiness. It is, in a word, irrelevance.
This is especially an issue in gifted and accomplished people.
...accomplishment is a well-documented source of happiness. If current accomplishment brings happiness, then shouldn’t the memory of that accomplishment provide some happiness as well?...Though the literature on this question is sparse, giftedness and achievements early in life do not appear to provide an insurance policy against suffering later on...abundant evidence suggests that the waning of ability in people of high accomplishment is especially brutal psychologically...Call it the Principle of Psychoprofessional Gravitation: the idea that the agony of professional oblivion is directly related to the height of professional prestige previously achieved, and to one’s emotional attachment to that prestige...the memory of remarkable ability, if that is the source of one’s self-worth, might, for some, provide an invidious contrast to a later, less remarkable life.
In some professions, early decline is inescapable. No one expects an Olympic athlete to remain competitive until age 60. But in many physically nondemanding occupations, we implicitly reject the inevitability of decline before very old age. Sure, our quads and hamstrings may weaken a little as we age. But as long as we retain our marbles, our quality of work as a writer, lawyer, executive, or entrepreneur should remain high up to the very end, right? ...The data are shockingly clear that for most people, in most fields, decline starts earlier than almost anyone thinks...if you start a career in earnest at 30, expect to do your best work around 50 and go into decline soon after that...the most common age for producing a magnum opus is the late 30s...the likelihood of a major discovery increases steadily through one’s 20s and 30s and then declines through one’s 40s, 50s, and 60s. Are there outliers? Of course. But the likelihood of producing a major innovation at age 70 is approximately what it was at age 20—almost nonexistent.
In sum, if your profession requires mental processing speed or significant analytic capabilities—the kind of profession most college graduates occupy—noticeable decline is probably going to set in earlier than you imagine...Whole sections of bookstores are dedicated to becoming successful. The shelves are packed with titles like The Science of Getting Rich and The 7 Habits of Highly Effective People. There is no section marked “Managing Your Professional Decline.”
Brooks contrasts the declines of Charles Darwin, who became embittered and inactive after his younger most creative period had passed, with Johann Sebastian Bach, who redesigned his life - as baroque music was being replaced by the "classical" style - moving from being an innovator to being a teacher and instructor.
The lesson for you and me, especially after 50: Be Johann Sebastian Bach, not Charles Darwin...How does one do that? A potential answer lies in the work of the British psychologist Raymond Cattell, who in the early 1940s introduced the concepts of fluid and crystallized intelligence. Cattell defined fluid intelligence as the ability to reason, analyze, and solve novel problems—what we commonly think of as raw intellectual horsepower. ...It is highest relatively early in adulthood and diminishes starting in one’s 30s and 40s...Crystallized intelligence, in contrast, is the ability to use knowledge gained in the past. Think of it as possessing a vast library and understanding how to use it. It is the essence of wisdom. Because crystallized intelligence relies on an accumulating stock of knowledge, it tends to increase through one’s 40s, and does not diminish until very late in life...poets—highly fluid in their creativity—tend to have produced half their lifetime creative output by age 40 or so. Historians—who rely on a crystallized stock of knowledge—don’t reach this milestone until about 60...No matter what mix of intelligence your field requires, you can always endeavor to weight your career away from innovation and toward the strengths that persist, or even increase, later in life...teaching is an ability that decays very late in life, a principal exception to the general pattern of professional decline over time...the most profound insights tend to come from those in their 30s and early 40s. The best synthesizers and explainers of complicated ideas—that is, the best teachers—tend to be in their mid-60s or older, some of them well into their 80s.
Most Eastern philosophy warns that focusing on acquisition leads to attachment and vanity, which derail the search for happiness by obscuring one’s essential nature. As we grow older, we shouldn’t acquire more, but rather strip things away to find our true selves—and thus, peace.
At some point, writing one more book will not add to my life satisfaction; it will merely stave off the end of my book-writing career. The canvas of my life will have another brushstroke that, if I am being forthright, others will barely notice, and will certainly not appreciate very much. The same will be true for most other markers of my success.
What I need to do, in effect, is stop seeing my life as a canvas to fill, and start seeing it more as a block of marble to chip away at and shape something out of. I need a reverse bucket list. My goal for each year of the rest of my life should be to throw out things, obligations, and relationships until I can clearly see my refined self in its best form.
Hindu philosophy—and indeed the wisdom of many philosophical traditions—suggests that you should be prepared to walk away from the rewards of success before you feel ready. Even if you’re at the height of your professional prestige, you probably need to scale back your career ambitions in order to scale up your metaphysical ones.
David Brooks talks about the difference between “résumé virtues” and “eulogy virtues,”...Résumé virtues are professional and oriented toward earthly success. They require comparison with others. Eulogy virtues are ethical and spiritual, and require no comparison. Your eulogy virtues are what you would want people to talk about at your funeral. As in He was kind and deeply spiritual, not He made senior vice president at an astonishingly young age and had a lot of frequent-flier miles....To move from résumé virtues to eulogy virtues is to move from activities focused on the self to activities focused on others. An abundance of research strongly suggests that happiness—not just in later years but across the life span—is tied directly to the health and plentifulness of one’s relationships. Pushing work out of its position of preeminence—sooner rather than later—to make space for deeper relationships can provide a bulwark against the angst of professional decline.
The aspen tree is an excellent metaphor for a successful person—but not, it turns out, for its solitary majesty. Above the ground, it may appear solitary. Yet each individual tree is part of an enormous root system, which is together one plant...The secret to bearing my decline—to enjoying it—is to become more conscious of the roots linking me to others. If I have properly developed the bonds of love among my family and friends, my own withering will be more than offset by blooming in others.

Friday, June 28, 2019

Perception as controlled hallucination - predictive processing and the nature of conscious experience

I've now read several times through a fascinating conversation with philosopher Andy Clark. I suggest you read the piece, and here pass on some edited clips. First, his comments on most current A.I. efforts:
There's something rather passive about the kinds of artificial intelligence ...[that are]...trained on an objective function. The AI tries to do a particular thing for which it might be exposed to an awful lot of data in trying to come up with ways to do this thing. But at the same time, it doesn't seem to inhabit bodies or inhabit worlds; it is solving problems in a disembodied, disworlded space. The nature of intelligence looks very different when we think of it as a rolling process that is embedded in bodies or embedded in worlds. Processes like that give rise to real understandings of a structured world.
Then, his ideas on how our internal and external worlds are a continuum:
Perception itself is a kind of controlled hallucination. You experience a structured world because you expect a structured world, and the sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception.
The Bayesian brain, predictive processing, hierarchical predictive coding are all, roughly speaking, names for the same picture in which experience is constructed at the shifting borderline between sensory evidence and top-down prediction or expectation. There's been a big literature out there on the perceptual side of things. It's a fairly solid literature. What predictive processing did that I found particularly interesting—and this is mostly down to a move that was made by Karl Friston—was apply the same story to action. In action, what we're doing is making a certain set of predictions about the shape of the sensory information that would result if I were to perform the action. Then you get rid of prediction errors relative to that predicted flow by making the action.
There's a pleasing symmetry there. Once you've got action on the table in these stories—the idea is that we bring action about by predicting sensory flows that are non actual and then getting rid of prediction errors relative to those sensory flows by bringing the action about—that means that epistemic action, as it's sometimes called, is right there on the table. Systems like that cannot just act in the world to fulfill their goals; they can also act in the world so as to get better information to fulfill their goals. And that's something that active animals do all the time. The chicken, when it bobs its head around, is moving its sensors around to get information that allows it to do depth perception that it can't do unless it bobs its head around...Epistemic action, and practical action, and perception, and understanding are now all rolled together in this nice package.
An upshot here is that there's no experience without the application of some model to try to sift what is worthwhile for a creature like you in the signal and what isn't worthwhile for a creature like you.
Apart from the exteroceptive signals that we take in from vision, sound, and so on, and apart from the proprioceptive signals from our body that are what we predict in order to move our body around, there's also all of the interoceptive signals that are coming from the heart and from the viscera, et cetera...being subtly inflected by interoception information is part of what makes our conscious experience of the world the kind of experience that it is. So, artificial systems without interoception could perceive their world in an exteroceptive way, they could act in their world, but they would be lacking what seems to me to be one important dimension of what it is to be a conscious human being in the world.

Wednesday, June 26, 2019

Implicit racial bias is preserved by historical roots of social environments.

Payne et al. note that geographic differences in implicit racial bias correlate with the number of slaves in those areas in 1860.

Geographic variation in implicit bias is associated with multiple racial disparities in life outcomes. We investigated the historical roots of geographical differences in implicit bias by comparing average levels of implicit bias with the number of slaves in those areas in 1860. Counties and states more dependent on slavery in 1860 displayed higher pro-White implicit bias today among White residents and less pro-White bias among Black residents. Mediation analyses suggest that historical oppression may be transmitted into contemporary biases through structural inequalities, including disparities in poverty and upward mobility. Given the importance of contextual factors, efforts to reduce unintended discrimination might focus on modifying social environments that cue implicit biases in the minds of individuals.
Implicit racial bias remains widespread, even among individuals who explicitly reject prejudice. One reason for the persistence of implicit bias may be that it is maintained through structural and historical inequalities that change slowly. We investigated the historical persistence of implicit bias by comparing modern implicit bias with the proportion of the population enslaved in those counties in 1860. Counties and states more dependent on slavery before the Civil War displayed higher levels of pro-White implicit bias today among White residents and less pro-White bias among Black residents. These associations remained significant after controlling for explicit bias. The association between slave populations and implicit bias was partially explained by measures of structural inequalities. Our results support an interpretation of implicit bias as the cognitive residue of past and present structural inequalities.

Monday, June 24, 2019

Over 50-fold difference between individuals in circadian melatonin sensitivity to evening light.

Phillips et al. probe how our high sensitivity to artificial light after dusk perturbs the circadian rhythm of our sleep hormone melatonin:

Electric lighting has fundamentally altered how the human circadian clock synchronizes to the day/night cycle. Exposure to light after dusk is pervasive in the modern world. We examined group-level sensitivity of the circadian system to evening light and the degree to which sensitivity varies between individuals. We found that, on average, humans are highly sensitive to evening light. Specifically, 50% suppression of melatonin occurred at less than 30 lux, which is comparable to or lower than typical indoor lighting used at night, as well as light produced by electronic devices. Significantly, there was a greater than 50-fold difference in sensitivity to evening light across individuals. Interindividual differences in light sensitivity may explain differential vulnerability to circadian disruption and subsequent impact on human health.
Before the invention of electric lighting, humans were primarily exposed to intense (>300 lux) or dim (less than 30 lux) environmental light—stimuli at extreme ends of the circadian system’s dose–response curve to light. Today, humans spend hours per day exposed to intermediate light intensities (30–300 lux), particularly in the evening. Interindividual differences in sensitivity to evening light in this intensity range could therefore represent a source of vulnerability to circadian disruption by modern lighting. We characterized individual-level dose–response curves to light-induced melatonin suppression using a within-subjects protocol. Fifty-five participants (aged 18–30) were exposed to a dim control (less than 1 lux) and a range of experimental light levels (10–2,000 lux for 5 h) in the evening. Melatonin suppression was determined for each light level, and the effective dose for 50% suppression (ED50) was computed at individual and group levels. The group-level fitted ED50 was 24.60 lux, indicating that the circadian system is highly sensitive to evening light at typical indoor levels. Light intensities of 10, 30, and 50 lux resulted in later apparent melatonin onsets by 22, 77, and 109 min, respectively. Individual-level ED50 values ranged by over an order of magnitude (6 lux in the most sensitive individual, 350 lux in the least sensitive individual), with a 26% coefficient of variation. These findings demonstrate that the same evening-light environment is registered by the circadian system very differently between individuals. This interindividual variability may be an important factor for determining the circadian clock’s role in human health and disease.
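For readers curious how an "ED50" like the 24.60 lux figure is obtained: it is the light intensity at which a fitted sigmoid dose-response curve crosses 50% melatonin suppression. Below is a minimal, purely illustrative sketch in Python of that idea, using a Hill-type curve and made-up observations loosely echoing the group-level numbers above; it is not the authors' data or analysis code, and a real analysis would use proper nonlinear least-squares fitting.

```python
import math

def hill_suppression(intensity, ed50, n):
    """Fraction of melatonin suppressed at a given light intensity (lux),
    modeled as a Hill (sigmoid) dose-response curve."""
    return intensity ** n / (ed50 ** n + intensity ** n)

def fit_ed50(intensities, suppressions):
    """Crude grid-search fit of ED50 and Hill slope to observed data.
    Illustrative only; real analyses use nonlinear least squares."""
    best = None
    for ed50 in (x * 0.5 for x in range(2, 1000)):   # candidate ED50s: 1-499.5 lux
        for n in (x * 0.1 for x in range(5, 40)):    # candidate slopes: 0.5-3.9
            err = sum((hill_suppression(i, ed50, n) - s) ** 2
                      for i, s in zip(intensities, suppressions))
            if best is None or err < best[0]:
                best = (err, ed50, n)
    return best[1], best[2]

# Hypothetical observations (lux -> fraction of melatonin suppressed),
# loosely shaped like the reported group-level curve
lux = [10, 30, 50, 100, 400, 2000]
supp = [0.25, 0.55, 0.66, 0.80, 0.94, 0.99]
ed50, slope = fit_ed50(lux, supp)
print(f"fitted ED50 = {ed50:.1f} lux, Hill slope = {slope:.1f}")
```

With these invented points the fitted ED50 lands in the mid-20s of lux, which is what "highly sensitive to typical indoor light" means in practice: the curve's midpoint sits below ordinary room lighting.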

Friday, June 21, 2019

Mechanism of exercise and antioxidant stimulation of memory and new nerve cell growth

On reading this article by Yook et al. I promptly ordered a bottle of 10 mg astaxanthin capsules to add to my normal array of supplements (and exercise).

Leptin (LEP, a small protein hormone), produced and acting in the hippocampus, mediates enhancement by mild exercise (ME) of hippocampus-related memory and neurogenesis, which are further increased by an antioxidant carotenoid, astaxanthin (AX). Both are facilitated by the administration of ME or AX alone. The up-regulation of the LEP gene and LEP protein expression in the hippocampus by ME is further elevated when combined with AX. Consistently, the combined interventions increased hippocampal LEP protein. In LEP-deficient ob/ob mice, LEP replacement in the brain restored the ability of ME+AX to enhance hippocampal function. Thus, a combined lifestyle intervention based on ME, including yoga and tai chi, and specific dietary supplements that include antioxidants may together improve cognition and possibly retard cognitive decline in humans.
Regular exercise and dietary supplements with antioxidants each have the potential to improve cognitive function and attenuate cognitive decline, and, in some cases, they enhance each other. Our current results reveal that low-intensity exercise (mild exercise, ME) and the natural antioxidant carotenoid astaxanthin (AX) each have equivalent beneficial effects on hippocampal neurogenesis and memory function. We found that the enhancement by ME combined with AX in potentiating hippocampus-based plasticity and cognition is mediated by leptin (LEP) made and acting in the hippocampus. In assessing the combined effects upon wild-type (WT) mice undergoing ME with or without an AX diet for four weeks, we found that, when administered alone, ME and AX separately enhanced neurogenesis and spatial memory, and when combined they were at least additive in their effects. DNA microarray and bioinformatics analyses revealed not only the up-regulation of an antioxidant gene, ABHD3, but also that the up-regulation of LEP gene expression in the hippocampus of WT mice with ME alone is further enhanced by AX. Together, they also increased hippocampal LEP (h-LEP) protein levels and enhanced spatial memory mediated through AKT/STAT3 signaling. AX treatment also has direct action on human neuroblastoma cell lines to increase cell viability associated with increased LEP expression. In LEP-deficient mice (ob/ob), chronic infusion of LEP into the lateral ventricles restored the synergy. Collectively, our findings suggest that not only h-LEP but also exogenous LEP mediates effects of ME on neural functions underlying memory, which is further enhanced by the antioxidant AX.

Wednesday, June 19, 2019

Enhancing longevity by removing deteriorated body cells.

Here I pass on both the introductory summary and the concluding paragraph of van Deursen's review of efforts to enhance longevity by removing senescent cells (SNCs), body cells that have deteriorated and become dysfunctional.
The estimated “natural” life span of humans is ∼30 years, but improvements in working conditions, housing, sanitation, and medicine have extended this to ∼80 years in most developed countries. However, much of the population now experiences aging-associated tissue deterioration. Healthy aging is limited by a lack of natural selection, which favors genetic programs that confer fitness early in life to maximize reproductive output. There is no selection for whether these alterations have detrimental effects later in life. One such program is cellular senescence, whereby cells become unable to divide. Cellular senescence enhances reproductive success by blocking cancer cell proliferation, but it decreases the health of the old by littering tissues with dysfunctional senescent cells (SNCs). In mice, the selective elimination of SNCs (senolysis) extends median life span and prevents or attenuates age-associated diseases. This has inspired the development of targeted senolytic drugs to eliminate the SNCs that drive age-associated disease in humans.
As knowledge of the fundamental biology and vulnerabilities of SNCs expands, the rational design of targeted senolytics is expected to yield therapies to eliminate SNCs that drive degeneration and disease. This positive outlook is based on successes in oncology and because the main limitation of cancer therapies—the clonal expansion of drug-resistant cells—does not apply to SNCs. Additional confidence comes from the recent progress in bringing senolytic agents into clinical trials. The first clinical trial is testing UBX0101 for the treatment of osteoarthritis of the knee. Another drug, UBX1967, a BCL-2 family inhibitor specifically tailored for diseases of the aging eye, is also advancing to human testing. Multiple clinical trials treating diverse diseases of aging with senolytic drugs are expected to follow soon. This includes two-step cancer treatment approaches whereby malignant cells are first forced into a senescent state by one drug and then eliminated with a senolytic agent. Success in these first clinical studies is the next critical milestone on the road to the development of treatments that can extend healthy longevity in people.

Monday, June 17, 2019

Why can we read only one word at a time?

Fascinating work from White et al.:

Because your brain has limited processing capacity, you cannot comprehend the text on this page all at once. In fact, skilled readers cannot even recognize just two words at once. We measured how the visual areas of the brain respond to pairs of words while participants attended to one word or tried to divide attention between both. We discovered that a single word-selective region in left ventral occipitotemporal cortex processes both words in parallel. The parallel streams of information then converge at a bottleneck in an adjacent, more anterior word-selective region. This result reveals the functional significance of subdivisions within the brain’s reading circuitry and offers a compelling explanation for a profound limit on human perception.
In most environments, the visual system is confronted with many relevant objects simultaneously. That is especially true during reading. However, behavioral data demonstrate that a serial bottleneck prevents recognition of more than one word at a time. We used fMRI to investigate how parallel spatial channels of visual processing converge into a serial bottleneck for word recognition. Participants viewed pairs of words presented simultaneously. We found that retinotopic cortex processed the two words in parallel spatial channels, one in each contralateral hemisphere. Responses were higher for attended than for ignored words but were not reduced when attention was divided. We then analyzed two word-selective regions along the occipitotemporal sulcus (OTS) of both hemispheres (subregions of the visual word form area, VWFA). Unlike retinotopic regions, each word-selective region responded to words on both sides of fixation. Nonetheless, a single region in the left hemisphere (posterior OTS) contained spatial channels for both hemifields that were independently modulated by selective attention. Thus, the left posterior VWFA supports parallel processing of multiple words. In contrast, activity in a more anterior word-selective region in the left hemisphere (mid OTS) was consistent with a single channel, showing (i) limited spatial selectivity, (ii) no effect of spatial attention on mean response amplitudes, and (iii) sensitivity to lexical properties of only one attended word. Therefore, the visual system can process two words in parallel up to a late stage in the ventral stream. The transition to a single channel is consistent with the observed bottleneck in behavior.

Friday, June 14, 2019

Why high-class people get away with incompetence.

Belmi et al. report four studies suggesting that people who come from a higher social class are more likely to have an inflated sense of their skills, even when tests show that they are average. This unmerited overconfidence is interpreted by strangers as competence, highlighting yet another way that family wealth and parents' education confer an advantage in getting ahead in life.
Understanding how socioeconomic inequalities perpetuate is a central concern among social and organizational psychologists. Drawing on a collection of findings suggesting that different social class contexts have powerful effects on people’s sense of self, we propose that social class shapes the beliefs that people hold about their abilities, and that this, in turn, has important implications for how status hierarchies perpetuate. We first hypothesize that compared with individuals with relatively low social class, individuals with relatively high social class are more overconfident. Then, drawing on research suggesting that overconfidence can confer social advantages, we further hypothesize that the overconfidence of higher class individuals can help perpetuate the existing class hierarchy: It can provide them a path to social advantage by making them appear more competent in the eyes of others. We test these ideas in four large studies with a combined sample of 152,661 individuals. Study 1, a large field study featuring small-business owners from Mexico, found evidence that individuals with relatively high social class are more overconfident compared with their lower-class counterparts. Study 2, a multiwave study in the United States, replicated this result and further shed light on the underlying mechanism: Individuals with relatively high (vs. low) social class tend to be more overconfident because they have a stronger desire to achieve high social rank. Study 3 replicated these findings in a high-powered, preregistered study and found that individuals with relatively high social class were more overconfident, even in a task in which they had no performance advantages. Study 4, a multiphase study that featured a mock job interview in the laboratory, found that compared with their lower-class counterparts, higher-class individuals were more overconfident; overconfidence, in turn, made them appear more competent and more likely to attain social rank.

Wednesday, June 12, 2019

Policy evaluation by randomized trials may provoke greater objections than implementing them untested.

Meyer et al. find that randomized experiments (A/B tests) comparing two unobjectionable policies or treatments can generate more objections than simply implementing either policy untested.

Randomized experiments—long the gold standard in medicine—are increasingly used throughout the social sciences and professions to evaluate business products and services, government programs, education and health policies, and global aid. We find robust evidence—across 16 studies of 5,873 participants from three populations spanning nine domains—that people often approve of untested policies or treatments (A or B) being universally implemented but disapprove of randomized experiments (A/B tests) to determine which of those policies or treatments is superior. This effect persists even when there is no reason to prefer A to B and even when recipients are treated unequally and randomly in all conditions (A, B, and A/B). This experimentation aversion may be an important barrier to evidence-based practice.
Randomized experiments have enormous potential to improve human welfare in many domains, including healthcare, education, finance, and public policy. However, such “A/B tests” are often criticized on ethical grounds even as similar, untested interventions are implemented without objection. We find robust evidence across 16 studies of 5,873 participants from three diverse populations spanning nine domains—from healthcare to autonomous vehicle design to poverty reduction—that people frequently rate A/B tests designed to establish the comparative effectiveness of two policies or treatments as inappropriate even when universally implementing either A or B, untested, is seen as appropriate. This “A/B effect” is as strong among those with higher educational attainment and science literacy and among relevant professionals. It persists even when there is no reason to prefer A to B and even when recipients are treated unequally and randomly in all conditions (A, B, and A/B). Several remaining explanations for the effect—a belief that consent is required to impose a policy on half of a population but not on the entire population; an aversion to controlled but not to uncontrolled experiments; and a proxy form of the illusion of knowledge (according to which randomized evaluations are unnecessary because experts already do or should know “what works”)—appear to contribute to the effect, but none dominates or fully accounts for it. We conclude that rigorously evaluating policies or treatments via pragmatic randomized trials may provoke greater objection than simply implementing those same policies or treatments untested.

Monday, June 10, 2019

Is technology really reshaping our consciousness?

My first reaction on reading a recent Op-Ed piece by David Brooks "When Trolls and Crybullies Rule the Earth" was to say 'Yes!', and think I should immediately fire off some selected clips in a MindBlog post. I'm glad I waited a bit, because as I look at it again I feel he has erred on the side of being an alarmist drama queen to grab our attention. The article begins with:
Over the past several years, teenage suicide rates have spiked horrifically....What's going on?
Several sources show an increasing rate from 2010 to 2017, but a look at Centers for Disease Control data shows higher rates in the late 1980s and early 1990s. He continues:
My answer starts with technology but is really about the sort of consciousness online life induces.
Brooks then describes transformations of human consciousness as if they had replaced, rather than added to and enhanced, older forms of consciousness: the shift from an oral to a printed culture centuries ago, and the current shift from printed to electronic communication.
Attention and affection have gone from being private bonds to being publicly traded goods...up until recently most of the attention a person received came from family and friends and was pretty stable. But now most of the attention a person receives can come from far and wide and is tremendously volatile...your online post can go viral and get massively admired or ridiculed, while other times your post can leave you alone and completely ignored. Communication itself, once mostly collaborative, is now often competitive, with bids for affection and attention. It is also more manipulative — gestures designed to generate a response.
But... weren't the old-fashioned kinds of attention, exchanged in person or in crowds rather than electronically, also labile and competitive, with constant bids for affection and attention? Electronics may have amplified what was happening, but they didn't fundamentally transform it. Vicious gossip can be exchanged in the old-fashioned personal way or in newer, less personal electronic ways. Online platforms may amplify our traits, but they don't fundamentally change them. Trolls and crybullies have always been with us. Still, Brooks makes good points, even if a bit exaggerated:
The internet has become a place where people communicate out of their competitive ego: I’m more fabulous than you (a lot of Instagram). You’re dumber than me (much of Twitter). It’s not a place where people share from their hearts and souls.
Of course, people enmeshed in such a climate are more likely to feel depressed, to suffer from mental health problems. Of course, they are more likely to see human relationship through the abuser/victim frame, and to be acutely sensitive to any power imbalance. Imagine you’re 17 and people you barely know are saying nice or nasty things about your unformed self. It creates existential anxiety and hence fanaticism.

Friday, June 07, 2019

The wisdom of partisan crowds.

Fascinating and counterintuitive findings from Becker et al., who find that the wisdom of crowds is robust to partisan bias:

Normative theories of deliberative democracy are based on the premise that social information processing can improve group beliefs. Research on the “wisdom of crowds” has found that information exchange can increase belief accuracy in many cases, but theories of political polarization imply that groups will become more extreme—and less accurate—when beliefs are motivated by partisan political bias. While this risk is not expected to emerge in politically heterogeneous networks, homogeneous social networks are expected to amplify partisan bias when people communicate only with members of their own political party. However, we find that the wisdom of crowds is robust to partisan bias. Social influence not only increases accuracy but also decreases polarization, even without between-group network ties.
Theories in favor of deliberative democracy are based on the premise that social information processing can improve group beliefs. While research on the “wisdom of crowds” has found that information exchange can increase belief accuracy on noncontroversial factual matters, theories of political polarization imply that groups will become more extreme—and less accurate—when beliefs are motivated by partisan political bias. A primary concern is that partisan biases are associated not only with more extreme beliefs, but also with a diminished response to social information. While bipartisan networks containing both Democrats and Republicans are expected to promote accurate belief formation, politically homogeneous networks are expected to amplify partisan bias and reduce belief accuracy. To test whether the wisdom of crowds is robust to partisan bias, we conducted two web-based experiments in which individuals answered factual questions known to elicit partisan bias before and after observing the estimates of peers in a politically homogeneous social network. In contrast to polarization theories, we found that social information exchange in homogeneous networks not only increased accuracy but also reduced polarization. Our results help generalize collective intelligence research to political domains.

Wednesday, June 05, 2019

Inequality brokered

Public opinion about sexual minorities has improved dramatically in recent years, but an analysis by Sun and Gao shows that discrimination against same-sex borrowers in mortgage lending has continued unchecked.

We propose a method to infer people’s sexual orientation indirectly through gender disclosure of the borrower and coborrower in a mortgage. Furthermore, we examine lending practices toward same-sex borrowers and its spillover effects. We attempt to extend the research on race/gender discrimination by systematically investigating the potentially different lending treatment toward same-sex borrowers. The data reveal that, compared with otherwise similar different-sex applicants, same-sex applicants are 73.12% more likely to be denied, and they tend to be charged up to 0.2% higher fees/interest. Furthermore, neighborhoods’ higher same-sex population density adversely affects both same-sex and different-sex borrowers’ lending experiences. Our method might approximately measure the US homosexual population distribution up to the census tract level annually over decades.
Using massive US mortgage lending data, we propose a method to infer a borrower’s sexual orientation indirectly without a self-identification requirement and demonstrate the method’s potential to approximately measure the sexual orientation of the US population at the local level annually over decades. We continue to examine the lending practices to same-sex borrowers and its spillover effects. The persistent results since 1990 reveal that, in contrast with otherwise comparable different-sex loan applicants, the approval rate for same-sex applicants is ∼3–8% lower. Furthermore, conditional on approval, lenders, on average, charge about 0.02–0.2% higher interest to same-sex borrowers, which is equivalent to an annual total of $8.6 million to $86 million in additional interest/fees nationwide. Meanwhile, we find that same-sex borrowers are less risky overall, as they exhibit similar default risk but lower prepayment risk. Finally, we document findings of spillover effects. That is, when the share of a neighborhood’s same-sex population increases, both same-sex and different-sex borrowers seem to experience more unfavorable lending outcomes overall. The findings should raise enough concerns to warrant further investigations.