Tuesday, December 04, 2018

More on the sociopathy of social media.

Languishing in my queue of potential posts have been two articles that I want to mention and pass on to readers.

Max Fisher writes on the unintended consequences of social media, from Myanmar to Germany:
I first went to Myanmar in early 2014, when the country was opening up, and there was no such thing as personal technology. Not even brick phones.
When I went back in late 2017, I could hardly believe it was the same country. Everybody had his or her nose in a smartphone, often logged in to Facebook. You’d meet with the same sources at the same roadside cafe, but now they’d drop a stack of iPhones on the table next to the tea.
It was like the purest possible experiment in what the same society looks like with or without modern consumer technology. Most people loved it, but it also helped drive genocidal violence against the Rohingya minority, empower military hard-liners and spin up riots.
...we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses. (Fisher has done articles on algorithm-driven violence in Germany and Sri Lanka)
And, Rich Hardy points to further work linking social media use and feelings of depression and loneliness. Work of Hunt et al. suggests that decreasing one's social media use can lead to significant improvements in personal well-being.

Monday, December 03, 2018

Our brains are prediction machines. Friston's free-energy principle

Further reading on the article noted in the previous post has made me realize that I have been seriously remiss in not paying more attention to a revolution in how we view our brains. From a Karl Friston piece in Nature Neuroscience on predictive coding:
In the 20th century we thought the brain extracted knowledge from sensations. The 21st century witnessed a ‘strange inversion’, in which the brain became an organ of inference, actively constructing explanations for what’s going on ‘out there’, beyond its sensory epithelia.
And, key points from a Friston review, "The free-energy principle: a unified brain theory?":
Adaptive agents must occupy a limited repertoire of states and therefore minimize the long-term average of surprise associated with sensory exchanges with the world. Minimizing surprise enables them to resist a natural tendency to disorder.
Surprise rests on predictions about sensations, which depend on an internal generative model of the world. Although surprise cannot be measured directly, a free-energy bound on surprise can be, suggesting that agents minimize free energy by changing their predictions (perception) or by changing the predicted sensory inputs (action).
Perception optimizes predictions by minimizing free energy with respect to synaptic activity (perceptual inference), efficacy (learning and memory) and gain (attention and salience). This furnishes Bayes-optimal (probabilistic) representations of what caused sensations (providing a link to the Bayesian brain hypothesis).
Bayes-optimal perception is mathematically equivalent to predictive coding and maximizing the mutual information between sensations and the representations of their causes. This is a probabilistic generalization of the principle of efficient coding (the infomax principle) or the minimum-redundancy principle.
Learning under the free-energy principle can be formulated in terms of optimizing the connection strengths in hierarchical models of the sensorium. This rests on associative plasticity to encode causal regularities and appeals to the same synaptic mechanisms as those underlying cell assembly formation.
Action under the free-energy principle reduces to suppressing sensory prediction errors that depend on predicted (expected or desired) movement trajectories. This provides a simple account of motor control, in which action is enslaved by perceptual (proprioceptive) predictions.
Perceptual predictions rest on prior expectations about the trajectory or movement through the agent's state space. These priors can be acquired (as empirical priors during hierarchical inference) or they can be innate (epigenetic) and therefore subject to selective pressure.
Predicted motion or state transitions realized by action correspond to policies in optimal control theory and reinforcement learning. In this context, value is inversely proportional to surprise (and implicitly free energy), and rewards correspond to innate priors that constrain policies.
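Under Gaussian assumptions, the perception-and-action story in the points above reduces to gradient descent on squared prediction error. The sketch below is a deliberately minimal, illustrative toy, not Friston's hierarchical formulation: the function names, the single scalar state, and the trivial identity generative model are my assumptions.

```python
# Toy predictive-coding loop: with Gaussian noise and an identity
# generative model, free energy collapses to squared prediction error,
# which can be suppressed by changing the prediction (perception)
# or by changing the sensory input itself (action).

def perceive(sensation, mu, gain=0.1, steps=100):
    """Perception: nudge the internal estimate mu toward the sensation,
    a gradient step on 0.5 * (sensation - mu)**2 at each iteration."""
    for _ in range(steps):
        mu += gain * (sensation - mu)
    return mu

def act(sensation, mu, gain=0.1, steps=100):
    """Action: drive the sensory input toward the prediction mu,
    the complementary route to minimizing the same prediction error."""
    for _ in range(steps):
        sensation += gain * (mu - sensation)
    return sensation

mu = perceive(sensation=2.0, mu=0.0)  # estimate converges toward 2.0
s = act(sensation=2.0, mu=0.0)        # input is driven toward 0.0
```

Either loop shrinks the same error term, which is the point of the "perception or action" symmetry in the review: the agent is indifferent, at this level of description, between updating its model and changing the world.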

Friday, November 30, 2018

Being a Beast Machine: The Somatic Basis of Selfhood

Seth and Tsakiris offer a review in Trends in Cognitive Sciences, with the title of this post, that immediately caught my eye. I'm working on a lecture now that incorporates some of its themes. Here I pass on the abstract, motivated readers can obtain a copy of the full article from me.

Highlights
We conceptualise experiences of embodied selfhood in terms of control-oriented predictive regulation (allostasis) of physiological states.
We account for distinctive phenomenological aspects of embodied selfhood, including its (partly) non-object-like nature and its subjective stability over time.
We explain predictive perception as a generalisation from a fundamental biological imperative to maintain physiological integrity: to stay alive.
We bring together several cognitive science traditions, including predictive processing, perceptual control theory, cybernetics, the free energy principle, and sensorimotor contingency theory.
We show how perception of the world around us, and of ourselves within it, happens with, through, and because of our living bodies.
We draw implications for developmental psychology and identify open questions in psychiatry and artificial intelligence.
Abstract
Modern psychology has long focused on the body as the basis of the self. Recently, predictive processing accounts of interoception (perception of the body ‘from within’) have become influential in accounting for experiences of body ownership and emotion. Here, we describe embodied selfhood in terms of ‘instrumental interoceptive inference’ that emphasises allostatic regulation and physiological integrity. We apply this approach to the distinctive phenomenology of embodied selfhood, accounting for its non-object-like character and subjective stability over time. Our perspective has implications for the development of selfhood and illuminates longstanding debates about relations between life and mind, implying, contrary to Descartes, that experiences of embodied selfhood arise because of, and not in spite of, our nature as ‘beast machines’.

Thursday, November 29, 2018

A molecular basis for the placebo effect.

Several popular articles point to work I wish I had been more aware of. Gary Greenberg in the NYTimes, and Cari Romm in The Atlantic, point to work of Kathryn Hall and collaborators showing that placebo responses are strongest in patients with a variant of a gene (COMT, which regulates the amount of dopamine in the brain) that causes higher levels of dopamine, which is linked to pain and to the good feelings that come with reward. Irritable bowel syndrome patients with the high-dopamine version of the gene were more likely to report that the placebo treatment had relieved their symptoms, an effect that was even stronger in the group that had received their treatment from a caring provider. Variations in the COMT gene locus are unlikely to fully account for a complex behavior like the placebo response, but they contribute to the puzzle. Here are the highlight points and abstract from the Hall et al. paper:
• Predisposition to respond to placebo treatment may be in part a stable heritable trait. 
• Candidate placebo response pathways may interact with drugs to modify outcomes in both the placebo and drug treatment arms of clinical trials. 
• Genomic analysis of randomized placebo and no-treatment controlled trials are needed to fully realize the potential of the placebome.
Placebos are indispensable controls in randomized clinical trials (RCTs), and placebo responses significantly contribute to routine clinical outcomes. Recent neurophysiological studies reveal neurotransmitter pathways that mediate placebo effects. Evidence that genetic variations in these pathways can modify placebo effects raises the possibility of using genetic screening to identify placebo responders and thereby increase RCT efficacy and improve therapeutic care. Furthermore, the possibility of interaction between placebo and drug molecular pathways warrants consideration in RCT design. The study of genomic effects on placebo response, ‘the placebome’, is in its infancy. Here, we review evidence from placebo studies and RCTs to identify putative genes in the placebome, examine evidence for placebo–drug interactions, and discuss implications for RCTs and clinical care.

Wednesday, November 28, 2018

Factoids about an ideal gas.

I pass on this neat slide from a lecture by physics professor Clint Sprott ("Ergodicity in Chaotic Oscillators") given to the Nov. 20 session of the Chaos and Complex Systems Seminar at Univ. of Wisc. Madison.


Tuesday, November 27, 2018

Impacts of outdoor artificial light on plant and animal species.

Gaston offers a perspective article describing how the nighttime lighting up of our planet is profoundly disturbing the activities of many animal and plant species. I pass on three paragraphs:
Artificial light at night can usefully be thought of as having two linked components. The first component—direct emissions from outdoor lighting sources, which include streetlights, building and infrastructure lighting, and road vehicle headlamps—is spatially extremely heterogeneous. Ground-level illuminance in the immediate vicinity can vary from less than 10 lux (lx) to more than 100 lx (for context, a full moon on a clear night has an illuminance of up to 0.1 lx). It often declines rapidly over distances of a few meters. However, emissions from unshielded lights can, when unobstructed, carry horizontally over many kilometers, making artificial light at night both an urban and a rural issue.
The second component of artificial light at night is skyglow, the brightening of the nighttime sky caused mainly by upwardly emitted and reflected artificial light that is scattered in the atmosphere by water, dust, and gas molecules. Although absolute illuminance levels are at most about 0.2 to 0.5 lx, much lower than those from direct emissions, these are often sufficiently high to obscure the Milky Way, which is used for orientation by some organisms. In many urban areas, skyglow even obscures lunar light cycles, which are used by many organisms as cues for biological activity.

In the laboratory, organismal responses, such as suppression of melatonin levels and changes to behavioral activity patterns, generally increase with greater intensities of artificial light at night. It is challenging to establish the form of such functional relationships in the field, but experiments and observations have shown that commonplace levels of artificial light at night influence a wide range of biological phenomena across a wide diversity of taxa, including individual physiology and behavior, species abundances and distributions, community structure and dynamics, and ecosystem function and process. Exposure to even dim nighttime lighting (below 1 lx) can drastically change activity patterns of both naturally day-active and night-active species. These effects can be exacerbated by trophic interactions, such that the abundances of species whose activity is not directly altered may nonetheless be severely affected under low levels of nighttime lighting.

Monday, November 26, 2018

Dietary fat: From foe to friend?

The title of the post is the title of one of the articles in a special section of the Nov. 16 issue of Science devoted to Diet and Health. I want to pass on the abstract of this article, as well as the list of points of consensus that emerge from many different studies cited in the article. It emphasizes the importance of which particular fat or carbohydrate sources are consumed:

Abstract
For decades, dietary advice was based on the premise that high intakes of fat cause obesity, diabetes, heart disease, and possibly cancer. Recently, evidence for the adverse metabolic effects of processed carbohydrate has led to a resurgence in interest in lower-carbohydrate and ketogenic diets with high fat content. However, some argue that the relative quantity of dietary fat and carbohydrate has little relevance to health and that focus should instead be placed on which particular fat or carbohydrate sources are consumed. This review, by nutrition scientists with widely varying perspectives, summarizes existing evidence to identify areas of broad consensus amid ongoing controversy regarding macronutrients and chronic disease.



Points of consensus.
1. With a focus on nutrient quality, good health and low chronic disease risk can be achieved for many people on diets with a broad range of carbohydrate-to-fat ratios. 
2. Replacement of saturated fat with naturally occurring unsaturated fats provides health benefits for the general population. Industrially produced trans fats are harmful and should be eliminated. The metabolism of saturated fat may differ on carbohydrate-restricted diets, an issue that requires study. 
3. Replacement of highly processed carbohydrates (including refined grains, potato products, and free sugars) with unprocessed carbohydrates (nonstarchy vegetables, whole fruits, legumes, and whole or minimally processed grains) provides health benefits. 
4. Biological factors appear to influence responses to diets of differing macronutrient composition. People with relatively normal insulin sensitivity and β cell function may do well on diets with a wide range of carbohydrate-to-fat ratios; those with insulin resistance, hypersecretion of insulin, or glucose intolerance may benefit from a lower-carbohydrate, higher-fat diet. 
5. A ketogenic diet may confer particular metabolic benefits for some people with abnormal carbohydrate metabolism, a possibility that requires long-term study. 
6. Well-formulated low-carbohydrate, high-fat diets do not require high intakes of protein or animal products. Reduced carbohydrate consumption can be achieved by substituting grains, starchy vegetables, and sugars with nonhydrogenated plant oils, nuts, seeds, avocado, and other high-fat plant foods. 
7. There is broad agreement regarding the fundamental components of a healthful diet that can serve to inform policy, clinical management, and individual dietary choice. Nonetheless, important questions relevant to the epidemics of diet-related chronic disease remain. Greater investment in nutrition research should assume a high priority.

Friday, November 23, 2018

Social learning circuits in the brain.

Allsop et al. at MIT observe brain circuits that let an animal learn from the experience of others:


Highlights
•Neurons in cortex and amygdala respond to cues that predict shock to another mouse 
•Cortex → amygdala neurons preferentially represent socially derived information 
•Cortical input to amygdala instructs encoding of observationally learned cues 
•Corticoamygdala inhibition impairs observational learning and social interaction 
Summary
Observational learning is a powerful survival tool allowing individuals to learn about threat-predictive stimuli without directly experiencing the pairing of the predictive cue and punishment. This ability has been linked to the anterior cingulate cortex (ACC) and the basolateral amygdala (BLA). To investigate how information is encoded and transmitted through this circuit, we performed electrophysiological recordings in mice observing a demonstrator mouse undergo associative fear conditioning and found that BLA-projecting ACC (ACC→BLA) neurons preferentially encode socially derived aversive cue information. Inhibition of ACC→BLA alters real-time amygdala representation of the aversive cue during observational conditioning. Selective inhibition of the ACC→BLA projection impaired acquisition, but not expression, of observational fear conditioning. We show that information derived from observation about the aversive value of the cue is transmitted from the ACC to the BLA and that this routing of information is critically instructive for observational fear conditioning.

Thursday, November 22, 2018

Conversation with your angry uncle over Thanksgiving - a chat bot.

I have to pass on this gem from this morning's NYTimes. Karin Tamerius, founder of "Smart Politics," offers a chat bot to help train you for a conversation with a relative in a political tribe different from yours. The trick is to not engage their defensive mechanisms, but to remain empathetic and interactive, sharing your own experience. The summary points:
1. Ask open-ended, genuinely curious, nonjudgmental questions. 
2. Listen to what people you disagree with say and deepen your understanding with follow-up inquiries. 
3. Reflect back their perspective by summarizing their answers and noting underlying emotions. 
4. Agree before disagreeing by naming ways in which you agree with their point of view.   
5. Share your perspective by telling a story about a personal experience. 
At the heart of the method is a simple idea: People cannot communicate effectively about politics when they feel threatened. Direct attacks – whether in the form of logical argument, evidence, or name-calling – trigger the sympathetic nervous system, limiting our capacity for reason, empathy, and self-reflection. To have productive conversations, we first need to make people feel safe. 
Most political conversations founder because challenges to our beliefs trigger our sympathetic nervous system. The goal is ensuring people feel safe enough during political dialogues to avoid this. That way the rational part of their brains stays in control and they’re better able to hear, absorb and adapt to new information. 
While it’s a powerful approach, it isn’t easy. It takes patience, tolerance and conscious engagement to get through all five steps. The method puts the burden for keeping the conversation calm on you: Not only must you not trigger the other person, but you must not get triggered yourself. 
Given the challenge, it’s tempting to avoid political discussions in mixed company altogether. Why risk provoking your angry uncle when you can chat about pumpkin pie instead? The answer is that when we choose avoidance over engagement, we are sacrificing a critical opportunity and responsibility to facilitate social and political change. 
Throughout American history, important strides were made because people dared to share their political views with relatives. The civil rights movement, the women’s movement, the antiwar movement, the gay rights movement, the struggle for marriage equality – all gained acceptance through difficult conversations among family members who initially disagreed vehemently with one another. 
To improve political discourse, remember your goal isn’t to score points, vent or put people in their place; it’s to make a difference. And that means sharing your message in a way that people who disagree with you – including your angry uncle – can hear.

Top-down and Bottom-up causation in the emergence of complexity.

I want to pass on just the first section of a commentary by George F.R. Ellis on a paper by Aharonov et al., whose evidence and analysis support a top–down structure in quantum mechanics according to which higher-order correlations can always determine lower-order ones, but not vice versa. Ellis puts this in the context of top-down versus bottom-up causation in the emergence of complexity at higher levels of organization.
The nature of emergence of complexity out of the underlying physics is a key issue in understanding the world around us.  Genuine emergence can be claimed to depend on top-down causation, which enables higher emergent levels to direct the outcomes of causation at lower levels to fulfill higher-level causal requirements; for example, the needs of heart physiology at the systems level determine gene expression at the cellular level via gene regulatory networks (see Figure, click to enlarge). However, the idea of top-down causation has been denied by a number of commentators. The paper by Aharonov et al. makes a strong contribution to this debate by giving quantum physics examples where top-down causation manifestly occurs. This physics result has strong implications for the philosophical debate about whether strong emergence is possible. Indeed, it gives specific examples where it occurs in a remarkably strong form.
Now, the word “causation” is regarded with suspicion by many philosophers of science, so to characterize what is happening one can perhaps rather use a number of different descriptions such as “whole–part constraint” or “top-down realization.” The key point remains the same, that higher levels can influence lower-level outcomes in many ways, and hence explain how strong emergence is possible. This occurs across science in general, and in physics in particular. The latter point is key because of the alleged causal completeness of physics, which supposition underlies supervenience arguments against strong emergence and the supposed possibility of overdetermination of lower-level outcomes. However, if top-down action occurs in physics in general, and in quantum physics — the bottom level of the hierarchy of emergence (See Figure) — in particular, such claims are undermined.

Wednesday, November 21, 2018

REM sleep in naps and memory consolidation in typical and Down syndrome children.

From Spano et al.:
Sleep is recognized as a physiological state associated with learning, with studies showing that knowledge acquisition improves with naps. Little work has examined sleep-dependent learning in people with developmental disorders, for whom sleep quality is often impaired. We examined the effect of natural, in-home naps on word learning in typical young children and children with Down syndrome (DS). Despite similar immediate memory retention, naps benefitted memory performance in typical children but hindered performance in children with DS, who retained less when tested after a nap, but were more accurate after a wake interval. These effects of napping persisted 24 h later in both groups, even after an intervening overnight period of sleep. During naps in typical children, memory retention for object-label associations correlated positively with percent of time in rapid eye movement (REM) sleep. However, in children with DS, a population with reduced REM, learning was impaired, but only after the nap. This finding shows that a nap can increase memory loss in a subpopulation, highlighting that naps are not universally beneficial. Further, in healthy preschoolers' naps, processes in REM sleep may benefit learning.

Tuesday, November 20, 2018

The ecstasy of speed - or leisure?

The Google Blogger platform used by Deric's MindBlog emails me comments on posts to approve (or delete, or mark as spam). The almost daily comments are usually platitudes unrelated to a post that contain links to a commercial site. Sometimes serendipity strikes as I read the post, before rejecting the comment, and find it so relevant to the present that I think it worth repeating. Here is such a post from September 13, 2016:

Because I so frequently feel overwhelmed by input streams of chunks of information, I wonder how readers of this blog manage to find time to attend to its contents. (I am gratified that so many seem to do so.) Thoughts like this made me pause over Maria Popova's recent essay on our anxiety about time. I want to pass on a few clips, and recommend that you read all of it. She quotes extensively from James Gleick's 2000 book "Faster: The Acceleration of Just About Everything," and begins by noting a 1918 Bertrand Russell quote, “both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
Half a century after German philosopher Josef Pieper argued that leisure is the basis of culture and the root of human dignity, Gleick writes:
We are in a rush. We are making haste. A compression of time characterizes the life of the century....We have a word for free time: leisure. Leisure is time off the books, off the job, off the clock. If we save time, we commonly believe we are saving it for our leisure. We know that leisure is really a state of mind, but no dictionary can define it without reference to passing time. It is unrestricted time, unemployed time, unoccupied time. Or is it? Unoccupied time is vanishing. The leisure industries (an oxymoron maybe, but no contradiction) fill time, as groundwater fills a sinkhole. The very variety of experience attacks our leisure as it attempts to satiate us. We work for our amusement...Sociologists in several countries have found that increasing wealth and increasing education bring a sense of tension about time. We believe that we possess too little of it: that is a myth we now live by.
To fully appreciate Gleick’s insightful prescience, it behooves us to remember that he is writing long before the social web as we know it, before the conspicuous consumption of “content” became the currency of the BuzzMalnourishment industrial complex, before the timelines of Twitter and Facebook came to dominate our record and experience of time. (Prescience, of course, is a form of time travel — perhaps our only nonfictional way to voyage into the future.) Gleick writes:
We live in the buzz. We wish to live intensely, and we wonder about the consequences — whether, perhaps, we face the biological dilemma of the waterflea, whose heart beats faster as the temperature rises. This creature lives almost four months at 46 degrees Fahrenheit but less than one month at 82 degrees...Yet we have made our choices and are still making them. We humans have chosen speed and we thrive on it — more than we generally admit. Our ability to work fast and play fast gives us power. It thrills us… No wonder we call sudden exhilaration a rush.
Gleick considers what our units of time reveal about our units of thought:
We have reached the epoch of the nanosecond. This is the heyday of speed. “Speed is the form of ecstasy the technical revolution has bestowed on man,” laments the Czech novelist Milan Kundera, suggesting by ecstasy a state of simultaneous freedom and imprisonment… That is our condition, a culmination of millennia of evolution in human societies, technologies, and habits of mind.
The more I experience and read about the winding up and acceleration of our lives (think of the rate and omnipresence of the current presidential campaign!), the more I realize the importance of rediscovering the sanity of leisure and quiet spaces.

Monday, November 19, 2018

Practicing gratitude, kindness, and compassion - can our i-devices help?

My Apple Watch occasionally, and unexpectedly, prompts me to stop and breathe (does it not like the pulse that it is measuring?). Noticing whether you are holding your breath or breathing can be very useful (the title of one of my web lectures is “Are you holding your breath? - Structures of arousal and calm”). My Univ. of Wisconsin colleague Richard Davidson writes a brief piece suggesting that this sort of prompting might be carried a bit further, to enhance other beneficial behaviors, suggesting that "As technology permeates our lives, it should be designed to boost our kindness, empathy, and happiness."
...tech giants Apple and Google recently announced new software improvements to empower iPhone and Android smartphone users to be more aware and potentially limit smartphone use. I certainly think it’s a necessary step in the right direction. But is it enough? I see this as one of the first admissions by these companies that their technologies have powerful effects on us as humans—effects we have been discovering as we all participate in this grand experiment that none of us signed up for.
This admission by the technology leaders opens the door to a huge opportunity to start designing the interactions and the actual contents of what we consume to prioritize the well-being of users. For instance, what if artificial intelligence used in virtual assistants like Apple’s Siri or Amazon’s Alexa were designed to detect variations in the tone of voice to determine when someone was struggling with loneliness or depression and to intervene by providing a simple mental exercise to cultivate well-being? Or a mental health resource? This is one idea tech leaders are exploring more seriously, and for good reason.
In our lab at UW–Madison, we’re looking to make video game play a prosocial and entertaining experience for kids. In collaboration with video games experts, our lab created a research video game to train empathy in kids, which has shown potential in changing circuits of the brain that underlie empathy in some middle schoolers.
We’re exploring similar programs in adults that go above and beyond meditation apps for people to participate in bite-sized mental training practices that help them connect with others, as well as deepen their attention and resilience. What if your next smartphone notification were a prompt to reflect on what you’re grateful for or a challenge to take a break from your device and notice the natural environment? We know that activities like cultivating gratitude and spending time in nature or connecting with loved ones can have therapeutic effects. There’s nothing stopping us from integrating these reminders into our digital lives.
Ultimately, I think it will take soul-searching from companies and consumers to get us closer to technologies that truly help and don’t hinder the nurturing of user well-being.
We have a moral obligation to take what we know about the human mind and harness it in this ever-changing digital frontier to promote well-being. I think we can succeed if we can deliberately design our systems to nurture the basic goodness of people. This is a vision in which human flourishing would be supported, rather than diminished, by the rapidly evolving technology that is shaping our minds.

Friday, November 16, 2018

Self-driving cars will have to decide who should live and who should die.

Johnson points to a collaboration by Awad et al. that explored the moral dilemmas faced by autonomous vehicles. They designed an experimental platform (The Moral Machine Website) that gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. A few clips from Johnson's summary:
The study...identified a few preferences that were strongest: People opt to save people over pets, to spare the many over the few and to save children and pregnant women over older people. But it also found other preferences for sparing women over men, athletes over obese people and higher status people, such as executives, instead of homeless people or criminals. There were also cultural differences in the degree, for example, that people would prefer to save younger people over the elderly in a cluster of mostly Asian countries.
Outside researchers said the results were interesting, but cautioned that the results could be overinterpreted. In a randomized survey, researchers try to ensure that a sample is unbiased and representative of the overall population, but in this case the voluntary study was taken by a population that was predominantly younger men. The scenarios are also distilled, extreme and far more black-and-white than the ones that are abundant in the real world, where probabilities and uncertainty are the norm.
“The big worry that I have is that people reading this are going to think that this study is telling us how to implement a decision process for a self-driving car,” said Benjamin Kuipers, a computer scientist at University of Michigan, who was not involved in the work.
“Building these cars, the process is not really about saying, ‘If I’m faced with this dilemma, who am I going to kill.’ It’s saying, ‘If we can imagine a situation where this dilemma could occur, what prior decision should I have made to avoid this?’” Kuipers said.
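The published study estimates these preferences with conjoint analysis over its ~40 million decisions; as a toy illustration only, here is a minimal sketch that simply tallies, over entirely hypothetical dilemma records, how often an attribute is spared when it appears:

```python
# Toy dilemma outcomes: each record lists the two sides of a dilemma
# and which side a (hypothetical) respondent chose to spare. The real
# Moral Machine analysis is far more sophisticated; this only shows
# the basic "spare rate" idea behind the reported preferences.
decisions = [
    ("child", "adult", "child"),
    ("child", "adult", "child"),
    ("child", "adult", "adult"),
    ("human", "pet", "human"),
    ("human", "pet", "human"),
]

def spare_rate(decisions, attr):
    """Fraction of dilemmas featuring `attr` in which it was spared."""
    featured = [d for d in decisions if attr in d[:2]]
    spared = [d for d in featured if d[2] == attr]
    return len(spared) / len(featured) if featured else None

print(spare_rate(decisions, "child"))  # children spared in 2 of 3 dilemmas
print(spare_rate(decisions, "human"))  # humans spared over pets every time
```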
And here is a TEDxCambridge talk by Iyad Rahwan, "The Social Dilemma of Driverless Cars."


Thursday, November 15, 2018

Biomarkers of inflammation are lower in people with more diverse positive emotions

From Ong et al.:
There is growing evidence that inflammatory responses may help to explain how emotions get “under the skin” to influence disease susceptibility. Moving beyond examination of individuals’ average level of emotion, this study examined how the breadth and relative abundance of emotions that individuals experience — emodiversity — is related to systemic inflammation. Using diary data from 175 adults aged 40 to 65 who provided end-of-day reports of their positive and negative emotions over 30 days, we found that greater diversity in day-to-day positive emotions was associated with lower circulating levels of inflammation (indicated by IL-6, CRP, fibrinogen), independent of mean levels of positive and negative emotions, body mass index, anti-inflammatory medications, medical conditions, personality, and demographics. No significant associations were observed between global or negative emodiversity and inflammation. These findings highlight the unique role daily positive emotions play in biological health.
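The abstract doesn't give the formula, but emodiversity is commonly operationalized (following earlier work by Quoidbach et al.) as the Shannon entropy of a person's emotion reports, so that it rises with both the number of distinct emotions and how evenly they are distributed. A minimal sketch of that operationalization, with made-up 30-day diary counts:

```python
from math import log

def emodiversity(counts):
    """Shannon-entropy emodiversity over emotion reports.

    counts: how many days (or instances) each distinct emotion was
    reported. Higher values mean emotions are both more varied and
    more evenly distributed across the diary period.
    """
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in props)

# Two hypothetical people reporting positive emotions over 30 days:
even = emodiversity([6, 6, 6, 6, 6])    # five emotions, evenly spread
skewed = emodiversity([26, 1, 1, 1, 1]) # dominated by one emotion
print(even > skewed)  # broader, more even experience -> higher emodiversity
```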

Wednesday, November 14, 2018

New longevity vitamins?

Well-known senior biochemist Bruce Ames (b. 1928) suggests that an array of compounds be added to the list of essential vitamins that maintain health and enhance longevity:
It is proposed that proteins/enzymes be classified into two classes according to their essentiality for immediate survival/reproduction and their function in long-term health: that is, survival proteins versus longevity proteins. As proposed by the triage theory, a modest deficiency of one of the nutrients/cofactors triggers a built-in rationing mechanism that favors the proteins needed for immediate survival and reproduction (survival proteins) while sacrificing those needed to protect against future damage (longevity proteins). Impairment of the function of longevity proteins results in an insidious acceleration of the risk of diseases associated with aging. I also propose that nutrients required for the function of longevity proteins constitute a class of vitamins that are here named “longevity vitamins.” I suggest that many such nutrients play a dual role for both survival and longevity. The evidence for classifying taurine as a conditional vitamin, and the following 10 compounds as putative longevity vitamins, is reviewed: the fungal antioxidant ergothioneine; the bacterial metabolites pyrroloquinoline quinone (PQQ) and queuine; and the plant antioxidant carotenoids lutein, zeaxanthin, lycopene, α- and β-carotene, β-cryptoxanthin, and the marine carotenoid astaxanthin. Because nutrient deficiencies are highly prevalent in the United States (and elsewhere), appropriate supplementation and/or an improved diet could reduce much of the consequent risk of chronic disease and premature aging.
By the way, Ames is a co-founder of Juvenon, a company that markets anti-aging supplements.

Tuesday, November 13, 2018

Are we getting too hysterical about the dangers of artificial intelligence?

It is certainly true that A.I. might take away the current jobs of people like lawyers and radiologists who scan data looking for patterns, or of those now doing tasks that can be accomplished by defined algorithms. A series of articles by and about Yuval Harari predicts that most of us will become a human herd manipulated by digital overlords who know more about us than we know about ourselves. These warnings have to be taken very seriously. (See, for example, "Watch Out Workers, Algorithms Are Coming to Replace You — Maybe," "Tech C.E.O.s Are in Love With Their Principal Doomsayer," and "Why Technology Favors Tyranny.")

However, there are arguments that one fear (that machines with a general, flexible, human-like intelligence similar or even superior to our own will render ordinary humans obsolete) is not yet even remotely realistic. The current deep-learning algorithms that sift big data for patterns and connections work at the level of our unconscious cognition, and don't engage context, ambiguity, and alternative scenarios the way our cognitive apparatus can. One can find some solace in how easy it is to fool A.I. pattern-recognition systems (see "Hackers easily fool artificial intelligence") and how hapless A.I. systems are at dealing with the actual meaning of what they are doing, or why they are doing it (see "Artificial Intelligence Hits the Barrier of Meaning"). Intelligence is a measure of the ability to achieve a particular aim, to deploy novel means to attain a goal; the goals themselves are extraneous to the intelligence. Being smart is not the same as wanting something. Any level of intelligence, including superintelligence, can be combined with just about any set of final goals, including goals that strike us as stupid.

One clip from the last link noted above, a choice quote from A.I. researcher Pedro Domingos:
People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.


Monday, November 12, 2018

Even a 10-minute walk can boost your brain

From Suwabe et al.:

Significance
Our previous work has shown that mild physical exercise can promote better memory in rodents. Here, we use functional MRI in healthy young adults to assess the immediate impact of a short bout of mild exercise on the brain mechanisms supporting memory processes. We find that this brief intervention rapidly enhanced highly detailed memory processing and resulted in elevated activity in the hippocampus and the surrounding regions, as well as increased coupling between the hippocampus and cortical regions previously known to support detailed memory processing. These findings represent a mechanism by which mild exercise, on par with yoga and tai chi, may improve memory. Future studies should test the long-term effects of regular mild exercise on age-related memory loss.
Abstract
Physical exercise has beneficial effects on neurocognitive function, including hippocampus-dependent episodic memory. Exercise intensity level can be assessed according to whether it induces a stress response; the most effective exercise for improving hippocampal function remains unclear. Our prior work using a special treadmill running model in animals has shown that stress-free mild exercise increases hippocampal neuronal activity and promotes adult neurogenesis in the dentate gyrus (DG) of the hippocampus, improving spatial memory performance. However, the rapid modification, from mild exercise, on hippocampal memory function and the exact mechanisms for these changes, in particular the impact on pattern separation acting in the DG and CA3 regions, are yet to be elucidated. To this end, we adopted an acute-exercise design in humans, coupled with high-resolution functional MRI techniques, capable of resolving hippocampal subfields. A single 10-min bout of very light-intensity exercise (30% V̇O2peak) results in rapid enhancement in pattern separation and an increase in functional connectivity between hippocampal DG/CA3 and cortical regions (i.e., parahippocampal, angular, and fusiform gyri). Importantly, the magnitude of the enhanced functional connectivity predicted the extent of memory improvement at an individual subject level. These results suggest that brief, very light exercise rapidly enhances hippocampal memory function, possibly by increasing DG/CA3-neocortical functional connectivity.

Friday, November 09, 2018

Facebook language predicts depression in medical records.

Eichstaedt et al. suggest that analysis of language used by consenting individuals in their social media accounts could provide a depression assessment that complements existing screening and monitoring procedures:
Depression, the most prevalent mental illness, is underdiagnosed and undertreated, highlighting the need to extend the scope of current screening methods. Here, we use language from Facebook posts of consenting individuals to predict depression recorded in electronic medical records. We accessed the history of Facebook statuses posted by 683 patients visiting a large urban academic emergency department, 114 of whom had a diagnosis of depression in their medical records. Using only the language preceding their first documentation of a diagnosis of depression, we could identify depressed patients with fair accuracy [area under the curve (AUC) = 0.69], approximately matching the accuracy of screening surveys benchmarked against medical records. Restricting Facebook data to only the 6 months immediately preceding the first documented diagnosis of depression yielded a higher prediction accuracy (AUC = 0.72) for those users who had sufficient Facebook data. Significant prediction of future depression status was possible as far as 3 months before its first documentation. We found that language predictors of depression include emotional (sadness), interpersonal (loneliness, hostility), and cognitive (preoccupation with the self, rumination) processes. Unobtrusive depression assessment through social media of consenting individuals may become feasible as a scalable complement to existing screening and monitoring procedures.
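The AUC figures quoted above (0.69 and 0.72) have a simple interpretation: the probability that a randomly chosen depressed patient receives a higher risk score from the model than a randomly chosen non-depressed one. A minimal sketch of that computation (the Mann-Whitney formulation of AUC is standard; the scores and labels below are made up, not from the paper):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 1/2).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical "depression risk" scores from a language model,
# against chart-based diagnoses (1 = depressed in medical record):
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))  # 8/9 here; the paper reports 0.69-0.72
```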

Thursday, November 08, 2018

Advancing front of old-age human survival

Zuo et al. examine the probabilities of death at ages past 65 years for males and females in developed countries; that is, they consider individuals in each year who are alive at age 65 y and thereafter experience death rates for that year. They conclude that an advancing old-age front characterizes old-age human survival in 20 developed countries. The long-term speed of the advancing front is ≃0.12 y per calendar year, about 3 y per human generation. Thus, the front implies that, e.g., age 68 y today is equivalent, in terms of mortality, to age 65 y a generation ago. Their finding of a shifting front in the percentiles of death at old age is consistent with some patterns of shifts in old-age mortality hazards.
Old-age mortality decline has driven recent increases in lifespans, but there is no agreement about trends in the age pattern of old-age deaths. Some argue that old-age deaths should become compressed at advanced ages, others argue that old-age deaths should become more dispersed with age, and yet others argue that old-age deaths are consistent with little change in dispersion. However, direct analysis of old-age deaths presents unusual challenges: Death rates at the oldest ages are always noisy, published life tables must assume an asymptotic age pattern of deaths, and the definition of “old-age” changes as lives lengthen. Here we use robust percentile-based methods to overcome some of these challenges and show, for five decades in 20 developed countries, that old-age survival follows an advancing front, like a traveling wave. The front lies between the 25th and 90th percentiles of old-age deaths, advancing with nearly constant long-term shape but annual fluctuations in speed. The existence of this front leads to several predictions that we verify, e.g., that advances in life expectancy at age 65 y are highly correlated with the advance of the 25th percentile, but not with distances between higher percentiles. Our unexpected result has implications for biological hypotheses about human aging and for future mortality change.
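The arithmetic of the constant-speed front is simple enough to sketch. The 0.12 y-per-calendar-year speed comes from the paper's summary; the helper function and its name are illustration only:

```python
FRONT_SPEED = 0.12  # years of age gained per calendar year (Zuo et al.)

def mortality_equivalent_age(age_then, years_elapsed, speed=FRONT_SPEED):
    """Age today with roughly the same old-age mortality that
    `age_then` had `years_elapsed` calendar years ago, under the
    constant-speed advancing-front model."""
    return age_then + speed * years_elapsed

# Age 65 one human generation (~25 years) ago corresponds to:
print(mortality_equivalent_age(65, 25))  # 68.0, i.e. "3 y per generation"
```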