This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Friday, July 01, 2016
How exercise enhances brain renewal and growth.
Several studies by now have shown that exercise enhances the production of BDNF (brain-derived neurotrophic factor), which leads to the creation of new nerve cells in the hippocampus (essential for learning and memory - for more on the link between exercise and better mental capacity in older people, see Reynolds). Sleiman et al. now show that in mice the metabolite β-hydroxybutyrate, which increases during heavy exercise, induces the activity of BDNF gene promoters. Their detailed abstract:
Exercise induces beneficial responses in the brain, which is accompanied by an increase in BDNF, a trophic factor associated with cognitive improvement and the alleviation of depression and anxiety. However, the exact mechanisms whereby physical exercise produces an induction in brain Bdnf gene expression are not well understood. While pharmacological doses of HDAC inhibitors exert positive effects on Bdnf gene transcription, the inhibitors represent small molecules that do not occur in vivo. Here, we report that an endogenous molecule released after exercise is capable of inducing key promoters of the Mus musculus Bdnf gene. The metabolite β-hydroxybutyrate, which increases after prolonged exercise, induces the activities of Bdnf promoters, particularly promoter I, which is activity-dependent. We have discovered that the action of β-hydroxybutyrate is specifically upon HDAC2 and HDAC3, which act upon selective Bdnf promoters. Moreover, the effects upon hippocampal Bdnf expression were observed after direct ventricular application of β-hydroxybutyrate. Electrophysiological measurements indicate that β-hydroxybutyrate causes an increase in neurotransmitter release, which is dependent upon the TrkB receptor. These results reveal an endogenous mechanism to explain how physical exercise leads to the induction of BDNF.
Thursday, June 30, 2016
Cognitive fatigue increases impulsivity.
Sort of makes sense. Blain et al. show that if you use your lateral prefrontal cortex (LPFC) for the control processes required by an extended, intense workday, its capacity for resisting the temptation of immediate rewards is diminished, and you're more likely to eat that late-afternoon sugar roll. Their abstract:
The ability to exert self-control is key to social insertion and professional success. An influential literature in psychology has developed the theory that self-control relies on a limited common resource, so that fatigue effects might carry over from one task to the next. However, the biological nature of the putative limited resource and the existence of carry-over effects have been matters of considerable controversy. Here, we targeted the activity of the lateral prefrontal cortex (LPFC) as a common substrate for cognitive control, and we prolonged the time scale of fatigue induction by an order of magnitude. Participants performed executive control tasks known to recruit the LPFC (working memory and task-switching) over more than 6 h (an approximate workday). Fatigue effects were probed regularly by measuring impulsivity in intertemporal choices, i.e., the propensity to favor immediate rewards, which has been found to increase under LPFC inhibition. Behavioral data showed that choice impulsivity increased in a group of participants who performed hard versions of executive tasks but not in control groups who performed easy versions or enjoyed some leisure time. Functional MRI data acquired at the start, middle, and end of the day confirmed that enhancement of choice impulsivity was related to a specific decrease in the activity of an LPFC region (in the left middle frontal gyrus) that was recruited by both executive and choice tasks. Our findings demonstrate a concept of focused neural fatigue that might be naturally induced in real-life situations and have important repercussions on economic decisions.
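For readers who want the mechanics: impulsivity in intertemporal choice is usually modeled with hyperbolic discounting, in which a reward of amount A at delay D has subjective value A/(1 + kD). Here is a minimal Python sketch of that standard model; it is my own illustration, not code or parameter values from the Blain et al. paper, and shows only that a fatigue-driven rise in the discount rate k is enough to flip choices toward the immediate option.

```python
# Minimal illustration of impulsivity in intertemporal choice, using the
# standard hyperbolic discounting model V = A / (1 + k * D). A higher
# discount rate k (as might follow from reduced LPFC control) makes the
# immediate reward win. Values are illustrative, not from the paper.

def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: subjective value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

def choose(immediate, delayed, delay_days, k):
    """Return which option a hyperbolic discounter prefers."""
    v_now = discounted_value(immediate, 0, k)
    v_later = discounted_value(delayed, delay_days, k)
    return "immediate" if v_now > v_later else "delayed"

# A rested decision-maker (low k) waits; a fatigued one (higher k) does not.
for k in (0.005, 0.05):  # hypothetical discount rates
    print(f"k={k}: choose {choose(immediate=20, delayed=30, delay_days=30, k=k)}")
# k=0.005 -> delayed   (30 / (1 + 0.15) ≈ 26.1 > 20)
# k=0.05  -> immediate (30 / (1 + 1.5)  = 12.0 < 20)
```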
Wednesday, June 29, 2016
The brain makes maps of abstract realms.
Underwood summarizes the importance of recent work by Constantinescu et al.
The brain is a mapmaker. As you navigate a landscape, neurons in a region called the entorhinal cortex fire at multiple locations, marking out a hexagonal grid on a mental map. The discovery of these so-called grid cells, and their role as a neuronal GPS for spatial navigation, won the 2014 Nobel Prize in Physiology or Medicine for Norwegian scientists Edvard Moser and May-Britt Moser. Now, it seems that the brain may make maps of abstract realms, too. A team at the University of Oxford in the United Kingdom provides evidence that gridlike neuronal activity throughout the brain helps people organize nonnavigation knowledge—for the purposes of the new study, differences in body shape between various types of birds. Several brain regions, including the entorhinal cortex and the ventromedial prefrontal cortex, showed gridlike neural representation of conceptual space. The abstract of the article:
It has been hypothesized that the brain organizes concepts into a mental map, allowing conceptual relationships to be navigated in a manner similar to that of space. Grid cells use a hexagonally symmetric code to organize spatial representations and are the likely source of a precise hexagonal symmetry in the functional magnetic resonance imaging signal. Humans navigating conceptual two-dimensional knowledge showed the same hexagonal signal in a set of brain regions markedly similar to those activated during spatial navigation. This gridlike signal is consistent across sessions acquired within an hour and more than a week apart. Our findings suggest that global relational codes may be used to organize nonspatial conceptual representations and that these codes may have a hexagonal gridlike pattern when conceptual knowledge is laid out in two continuous dimensions.
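A technical aside on what "hexagonal symmetry in the fMRI signal" means in practice: analyses in this literature typically regress the signal on the cosine and sine of six times the movement (or conceptual-trajectory) direction, so a grid-like code appears as six-fold, 60-degree periodicity. The toy Python sketch below illustrates that quadrature-regression idea on simulated data; it is my own illustration with made-up parameter values, not the authors' analysis code.

```python
import numpy as np

# Toy demonstration of six-fold (hexadirectional) modulation: a grid-like
# signal varies as cos(6 * (theta - phi)), with movement direction theta and
# unknown grid orientation phi. Regressing on cos(6*theta) and sin(6*theta)
# recovers both the amplitude and the orientation. Values are illustrative.

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=500)   # trajectory directions
phi = np.deg2rad(17.0)                        # hidden grid orientation
signal = 1.5 * np.cos(6 * (theta - phi)) + rng.normal(0, 0.5, theta.size)

# Quadrature regressors for six-fold periodicity
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)

amplitude = np.hypot(beta[0], beta[1])
orientation = np.rad2deg(np.arctan2(beta[1], beta[0]) / 6)
print(f"recovered amplitude ≈ {amplitude:.2f}, orientation ≈ {orientation:.1f}°")
# Expect amplitude near 1.5 and orientation near 17° (modulo 60°).
```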
Tuesday, June 28, 2016
Training our brains without our awareness.
There is growing interest in the use of neurofeedback (NF) as a tool to study and treat various clinical conditions. The uses of NF are diverse, ranging from motor and sensory tasks and investigations of cortical plasticity and attention to the treatment of chronic pain, depression, and mood disorders. In NF studies participants are typically aware that they are being trained, and receive specific goals for this training. Ramos et al. take the important step of showing that targeted brain networks can be modulated even in the complete absence of participants' awareness that a training process is taking place. From their introduction:
...participants were informed that they were engaged in a task aimed at mapping reward networks. Unbeknownst to them, these rewards were coupled with fMRI activations in specific cortical networks. Participants received auditory feedback associated with positive and negative rewards, based on blood oxygen level-dependent (BOLD)–fMRI activity from two well-researched visual regions of interest (ROIs), the fusiform face area (FFA) and the parahippocampal place area (PPA). However, participants were not informed of this procedure and believed, as revealed also by postscan interviews and questionnaires, that the reward was given at random.
They found that 10 of 16 participants were indeed able to modulate their brain activity to enhance the positive rewards, and were completely unaware that they were doing this. This ability was associated with changes in connectivity that were apparent in post-training rest sessions, indicating that the network changes resulting from the training carried over beyond the training period itself. The authors point out:
...that brain networks can be modified even in the complete absence of intention and awareness of the learning situation, raising intriguing possibilities for clinical interventions.
Monday, June 27, 2016
Much ado about grit.
Grit seems to have replaced resilience as the psychological virtue du jour, and like resilience (see Sehgal's piece "The Profound Emptiness of Resilience") it is getting some kick-back, as in this meta-analysis of the grit literature by Credé et al., which suggests that "interventions designed to enhance grit may only have weak effects on performance and success, the construct validity of grit is in question, and the primary utility of the grit construct may lie in the perseverance facet." (Also, you might check out this article by Selingo asking whether grit is overrated in explaining student success.)
Grit has been presented as a higher order personality trait that is highly predictive of both success and performance and distinct from other traits such as conscientiousness. This paper provides a meta-analytic review of the grit literature with a particular focus on the structure of grit and the relation between grit and performance, retention, conscientiousness, cognitive ability, and demographic variables. Our results based on 584 effect sizes from 88 independent samples representing 66,807 individuals indicate that the higher order structure of grit is not confirmed, that grit is only moderately correlated with performance and retention, and that grit is very strongly correlated with conscientiousness. We also find that the perseverance of effort facet has significantly stronger criterion validities than the consistency of interest facet and that perseverance of effort explains variance in academic performance even after controlling for conscientiousness. In aggregate our results suggest that interventions designed to enhance grit may only have weak effects on performance and success, that the construct validity of grit is in question, and that the primary utility of the grit construct may lie in the perseverance facet.
Friday, June 24, 2016
Why do Greek statues have such small penises?
Time for a random wild-card post, passing on Andrew Lear's speculations on Greek statues, as summarized in a piece by Olivia Goldhill. A few clips:
Don’t pretend your eyes don’t hover, at least for a moment, over the delicately sculpted penises on classical nude statues. While it may not sound like the most erudite subject, art historians haven’t completely ignored ancient Greek genitalia either...it turns out there’s a well-developed ideology behind those rather small penises...In ancient Greece, it seems, a small penis was the sought-after look for the alpha male.
“Greeks associated small and non-erect penises with moderation, which was one of the key virtues that formed their view of ideal masculinity,” explains classics professor Andrew Lear, who has taught at Harvard, Columbia and NYU and runs tours focused on gay history. “There is the contrast between the small, non-erect penises of ideal men (heroes, gods, nude athletes etc) and the over-size, erect penises of Satyrs (mythic half-goat-men, who are drunkards and wildly lustful) and various non-ideal men. Decrepit, elderly men, for instance, often have large penises.”...Similar ideas are reflected in ancient Greek literature, says Lear. For example, in Aristophanes’ Clouds a large penis is listed alongside a “pallid complexion,” a “narrow chest,” and “great lewdness” as one of the characteristics of un-athletic and dishonorable Athenian youths.
There are several theories as to why the “ideal” penis size developed from small in ancient Greece to large today. Lear suggests that perhaps the rise of porn, or an ideological push to subject men to the same body shaming that women typically face, are behind the modern emphasis on having a large penis...But Lear adds that in both societies, ideas about penis size are completely “unrelated to reality or aesthetics.” Contrary to popular myth, there’s no clear evidence that a large penis correlates with sexual satisfaction. Nor is there proof that a small penis is a sign of moderation and rationality...Society has been transformed in the thousands of years since ancient Greece but, when it comes to penis size, we’ve simply swapped one groundless theory for another.
Thursday, June 23, 2016
Think less, think better.
In an NYTimes piece, Moshe Bar translates the psycho-speak (which caused me to completely miss the point of the work) of an article describing research done with graduate student Shira Baror. Some clips from the NYTimes piece, then the article abstract:
Shira Baror and I demonstrate that the capacity for original and creative thinking is markedly stymied by stray thoughts, obsessive ruminations and other forms of “mental load.” Many psychologists assume that the mind, left to its own devices, is inclined to follow a well-worn path of familiar associations. But our findings suggest that innovative thinking, not routine ideation, is our default cognitive mode when our minds are clear.
...we gave participants a free-association task while simultaneously taxing their mental capacity to different degrees...they were given a word (e.g., shoe) and asked to respond as quickly as possible with the first word that came to mind (e.g., sock)...We found that a high mental load consistently diminished the originality and creativity of the response: Participants with seven digits to recall resorted to the most statistically common responses (e.g., white/black), whereas participants with two digits gave less typical, more varied pairings (e.g., white/cloud).
In another experiment, we found that longer response times were correlated with less diverse responses, ruling out the possibility that participants with low mental loads simply took more time to generate an interesting response. Rather, it seems that with a high mental load, you need more time to generate even a conventional thought. These experiments suggest that the mind’s natural tendency is to explore and to favor novelty, but when occupied it looks for the most familiar and inevitably least interesting solution...Our study suggests that your internal exploration is too often diminished by an overly occupied mind, much as is the case with your experience of your external environment.
After practicing vipassana meditation:
My thoughts — when I returned to the act of thinking about something rather than nothing — were fresher and more surprising...It is clear to me that this ancient meditative practice helps free the mind to have richer experiences of the present. Except when you are flying an F-16 aircraft or experiencing extreme fear or having an orgasm, your life leaves too much room for your mind to wander. As a result, only a small fraction of your mental capacity remains engaged in what is before it, and mind-wandering and ruminations become a tax on the quality of your life. Honing an ability to unburden the load on your mind, be it through meditation or some other practice, can bring with it a wonderfully magnified experience of the world — and, as our study suggests, of your own mind.
Now, here is the abstract of their article titled "Associative Activation and Its Relation to Exploration and Exploitation in the Brain," which, when I first scanned it, gave me no clue of the exegesis above!
Associative activation is commonly assumed to rely on associative strength, such that if A is strongly associated with B, B is activated whenever A is activated. We challenged this assumption by examining whether the activation of associations is state dependent. In three experiments, subjects performed a free-association task while the level of a simultaneous load was manipulated in various ways. In all three experiments subjects in the low-load conditions provided significantly more diverse and original associations compared with subjects in the high-load conditions, who exhibited high consensus. In an additional experiment, we found increased semantic priming of immediate associations under high load and of remote associations under low load. Taken together, these findings imply that activation of associations is an exploratory process by default, but is narrowed to exploiting the more immediate associations under conditions of high load. We propose a potential mechanism for processing associations in exploration and in exploitation modes, and suggest clinical implications.
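Their proposed mechanism, exploration by default that narrows to exploiting the strongest association under load, can be caricatured as temperature-controlled sampling over association strengths. The toy Python model below is purely my own speculative sketch of that idea; the association network and the "temperature" values are invented, and nothing like this appears in the paper.

```python
import math
import random

# Toy model of the exploration/exploitation proposal: associations to a cue
# are sampled with a softmax over association strengths. Low cognitive load
# behaves like a high sampling temperature (diverse, original responses);
# high load like a low temperature (everyone converges on "sock").
# The network and temperature values are invented for illustration.

ASSOCIATIONS = {"shoe": {"sock": 5.0, "lace": 3.0, "foot": 2.5,
                         "horse": 1.0, "tap-dance": 0.5}}

def sample_association(cue, temperature, rng):
    words, strengths = zip(*ASSOCIATIONS[cue].items())
    weights = [math.exp(s / temperature) for s in strengths]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(1)
for load, temp in [("low load (exploration)", 3.0),
                   ("high load (exploitation)", 0.3)]:
    responses = [sample_association("shoe", temp, rng) for _ in range(1000)]
    top = max(set(responses), key=responses.count)
    print(f"{load}: {len(set(responses))} distinct responses; "
          f"most common '{top}' on {responses.count(top) / 10:.0f}% of trials")
```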
Wednesday, June 22, 2016
Smartphone Era Politics
Selections from one of Roger Cohen's many intelligent essays in The New York Times:
The time has come for a painful confession: I have spent my life with words, yet I am illiterate...I do not have the words to be at ease in this world of steep migration from desktop to mobile, of search-engine optimization, of device-agnostic bundles, of cascading metrics and dashboards and buckets, of post-print onboarding and social-media FOMO (fear of missing out).
I was more at home with the yarn du jour. Jour was once an apt first syllable for the word journalism; hour would now be more appropriate...That was in the time of distance. Disconnection equaled immersion. Today, connection equals distraction...
We find ourselves at a pivot point. How we exist in relation to one another is in the midst of radical redefinition, from working to flirting. The smartphone is a Faustian device, at once liberation and enslavement. It frees us to be anywhere and everywhere — and most of all nowhere. It widens horizons. It makes those horizons invisible. Upright homo sapiens, millions of years in the making, has yielded in a decade to the stooped homo sapiens of downward device-dazzled gaze.
Perhaps this is how the calligrapher felt after 1440, when it began to be clear what Gutenberg had wrought. A world is gone. Another, as poor Jeb Bush (!) discovered, is being born — one where words mean everything and the contrary of everything, where sentences have lost their weight, where volume drowns truth.
You have to respect American voters. They are changing the lexicon in their anger with the status quo. They don’t care about consistency. They care about energy. Reasonableness dies. Provocation works. Whether you are for or against something, or both at the same time, is secondary to the rise your position gets. Our times are unpunctuated. Politics, too, has a new language, spoken above all by the Republican front-runner as he repeats that, “There is something going on.”...This appears to be some form of addictive delirium. It is probably dangerous in some still unknowable way.
Technology has upended not only newspapers. It has upended language itself, which is none other than a community’s system of communication. What is a community today? (One thing young people don't do on their smartphones is actually talk to each other.) Can there be community at all with downward gazes? I am not sure. But I am certain that cross-platform content has its beauty and its promise if only I could learn the right words to describe them.
Tuesday, June 21, 2016
Confronting the prejudiced brain
I want to pass on a clip from an essay published on the greater good science center's website. The article is worth reading in its entirety, and you should consider clicking around the greater good website to check other articles on core themes like compassion, empathy, altruism, gratitude, etc.
Recent studies using sophisticated brain imaging techniques have offered an unprecedented glimpse into the psychology of prejudice, and the results aren’t always pretty. In research by Princeton psychology professor Susan Fiske, for instance, white study participants were shown photographs of white and black faces while a functional magnetic resonance imaging scanner captured their brain activity. When asked a seemingly harmless question about the age of the face shown, participants’ brain activity spiked in a region known as the amygdala, which is involved in the fear response—it lights up when we encounter threats.
Yet when participants were asked to guess the favorite vegetable of each individual pictured, amygdala activity was no more stimulated by black faces than it was by white ones. In other words, when the study participants had to group people into a social category—even if it was by age rather than race—their brains reacted more negatively to black faces than to white faces. But when they were encouraged to see everyone as individuals with their own tastes and feelings—tastes and feelings just like the ones they have themselves—their reactions to black faces and white faces didn’t differ. Their fear dissolved as they no longer saw the black faces as others.
This research shows how it’s possible to shift perceptions of in-groups and out-groups in a lab. But how do we do this in real life? One of social psychology’s most influential theories about incorporating others into the in-group is called the contact hypothesis. Formulated by Harvard social psychologist Gordon Allport in the 1940s, the theory itself is straightforward: Increasing exposure to out-group members will improve attitudes toward that group and decrease prejudice and stereotyping. By no means an idealist, Allport recognized that contact between groups can also devolve into conflict and further reinforce negative attitudes. In fact, he argued that contact would bring positive results only when four specific conditions were in place for the groups involved:
-the support of legitimate authorities;
-common goals;
-a sense of interdependency that provides an incentive to cooperate; and
-a sense of having equal status.
Monday, June 20, 2016
Predicting whether you are going to vote.
An interesting nugget from Rogers et al., who find that campaign callers predict whether the people they call will actually vote more accurately than those people's own self-predictions do:
People are regularly asked to report on their likelihoods of carrying out consequential future behaviors, including complying with medical advice, completing educational assignments, and voting in upcoming elections. Despite these stated self-predictions being notoriously unreliable, they are used to inform many strategic decisions. We report two studies examining stated self-prediction about whether citizens will vote. We find that most self-predicted voters do not actually vote despite saying they will, and that campaign callers can discern which self-predicted voters will not actually vote. In study 1 (n = 4,463), self-predicted voters rated by callers as “100% likely to vote” were 2 times more likely to actually vote than those rated unlikely to vote. Study 2 (n = 3,064) replicated this finding and further demonstrated that callers’ prediction accuracy was mediated by citizens’ nonverbal signals of uncertainty and deception. Strangers can use nonverbal signals to improve predictions of follow through on self-reported intentions—an insight of potential value for politics, medicine, and education.
Friday, June 17, 2016
Yes, there have been aliens.
This post falls under the "random curious stuff" category in MindBlog's description line. Adam Frank gives an interesting account of work he has published with Woodruff Sullivan arguing that we now have enough information to conclude that alien civilizations have almost certainly existed at some point in cosmic history.
Among scientists, the probability of the existence of an alien society with which we might make contact is discussed in terms of something called the Drake equation.... Drake identified seven factors on which that number would depend, and incorporated them into an equation.
The first factor was the number of stars born each year. The second was the fraction of stars that had planets. After that came the number of planets per star that traveled in orbits in the right locations for life to form (assuming life requires liquid water). The next factor was the fraction of such planets where life actually got started. Then came factors for the fraction of life-bearing planets on which intelligence and advanced civilizations (meaning radio signal-emitting) evolved. The final factor was the average lifetime of a technological civilization.
In 1961, only the first factor — the number of stars born each year — was understood. And that level of ignorance remained until very recently...Three of the seven terms in Drake’s equation are now known. We know the number of stars born each year. We know that the percentage of stars hosting planets is about 100. And we also know that about 20 to 25 percent of those planets are in the right place for life to form. This puts us in a position, for the first time, to say something definitive about extraterrestrial civilizations — if we ask the right question.
In our recent paper, Professor Sullivan and I did this by shifting the focus of Drake’s equation. Instead of asking how many civilizations currently exist, we asked what the probability is that ours is the only technological civilization that has ever appeared. By asking this question, we could bypass the factor about the average lifetime of a civilization. This left us with only three unknown factors, which we combined into one “biotechnical” probability: the likelihood of the creation of life, intelligent life and technological capacity.
You might assume this probability is low, and thus the chances remain small that another technological civilization arose. But what our calculation revealed is that even if this probability is assumed to be extremely low, the odds that we are not the first technological civilization are actually high. Specifically, unless the probability for evolving a civilization on a habitable-zone planet is less than one in 10 billion trillion, then we are not the first.
To give some context for that figure: In previous discussions of the Drake equation, a probability for civilizations to form of one in 10 billion per planet was considered highly pessimistic. According to our finding, even if you grant that level of pessimism, a trillion civilizations still would have appeared over the course of cosmic history.
In other words, given what we now know about the number and orbital positions of the galaxy’s planets, the degree of pessimism required to doubt the existence, at some point in time, of an advanced extraterrestrial civilization borders on the irrational.
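The arithmetic here is simple enough to check yourself. Below is a back-of-envelope Python sketch using round input numbers consistent with the essay; the specific values are my assumptions, not figures quoted from the Frank and Sullivan paper.

```python
# Back-of-envelope version of the Frank & Sullivan reframing of the Drake
# equation described above. Inputs are round illustrative numbers chosen to
# be consistent with the essay (not taken verbatim from the paper):
#   n_stars : stars in the observable universe (order 10^22 to 10^23)
#   f_p     : fraction of stars with planets (~100 percent, per the essay)
#   f_hz    : fraction of planets in the habitable zone (~20 percent)
#   f_bt    : "biotechnical" probability that a habitable-zone planet ever
#             evolves a technological civilization (the one big unknown)

n_stars = 5e22
f_p = 1.0
f_hz = 0.2
habitable_planets = n_stars * f_p * f_hz          # ~1e22 candidate planets

def expected_civilizations(f_bt):
    return habitable_planets * f_bt

# The essay's threshold: we are likely "the first" only if f_bt < 1e-22,
# i.e. less than one in 10 billion trillion.
print(expected_civilizations(1e-22))              # ~1 -> the break-even point

# Even the "highly pessimistic" one-in-10-billion estimate leaves trillions:
print(f"{expected_civilizations(1e-10):.1e}")     # ~1e12 civilizations ever
```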
Thursday, June 16, 2016
The mistrust of science
Atul Gawande offers another fascinating essay, in the form of his commencement speech at the California Institute of Technology on June 10. A few clips:
The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).
The sociologist Gordon Gauchat studied U.S. survey data from 1974 to 2010 and found some deeply alarming trends. Despite increasing education levels, the public’s trust in the scientific community has been decreasing. This is particularly true among conservatives, even educated conservatives. In 1974, conservatives with college degrees had the highest level of trust in science and the scientific community. Today, they have the lowest.
Today, we have multiple factions putting themselves forward as what Gauchat describes as their own cultural domains, “generating their own knowledge base that is often in conflict with the cultural authority of the scientific community.” Some are religious groups (challenging evolution, for instance). Some are industry groups (as with climate skepticism). Others tilt more to the left (such as those that reject the medical establishment). As varied as these groups are, they are all alike in one way. They all harbor sacred beliefs that they do not consider open to question.
Science’s defenders have identified five hallmark moves of pseudoscientists. They argue that the scientific consensus emerges from a conspiracy to suppress dissenting views. They produce fake experts, who have views contrary to established knowledge but do not actually have a credible scientific track record. They cherry-pick the data and papers that challenge the dominant view as a means of discrediting an entire field. They deploy false analogies and other logical fallacies. And they set impossible expectations of research: when scientists produce one level of certainty, the pseudoscientists insist they achieve another.
The challenge of what to do about this—how to defend science as a more valid approach to explaining the world—has actually been addressed by science itself. Scientists have done experiments. In 2011, two Australian researchers compiled many of the findings in “The Debunking Handbook.” The results are sobering. The evidence is that rebutting bad science doesn’t work; in fact, it commonly backfires. Describing facts that contradict an unscientific belief actually spreads familiarity with the belief and strengthens the conviction of believers. That’s just the way the brain operates; misinformation sticks, in part because it gets incorporated into a person’s mental model of how the world works. Stripping out the misinformation therefore fails, because it threatens to leave a painful gap in that mental model—or no model at all.
Emerging from the findings was also evidence that suggested how you might build trust in science. Rebutting bad science may not be effective, but asserting the true facts of good science is. And including the narrative that explains them is even better. You don’t focus on what’s wrong with the vaccine myths, for instance. Instead, you point out: giving children vaccines has proved far safer than not. How do we know? Because of a massive body of evidence, including the fact that we’ve tried the alternate experiment before. Between 1989 and 1991, vaccination among poor urban children in the U.S. dropped. And the result was fifty-five thousand cases of measles and a hundred and twenty-three deaths.
Knowledge and the virtues of the scientific orientation live far more in the community than the individual. When we talk of a “scientific community,” we are pointing to something critical: that advanced science is a social enterprise, characterized by an intricate division of cognitive labor. Individual scientists, no less than the quacks, can be famously bull-headed, overly enamored of pet theories, dismissive of new evidence, and heedless of their fallibility. (Hence Max Planck’s observation that science advances one funeral at a time.) But as a community endeavor, it is beautifully self-correcting.
Beautifully organized, however, it is not. Seen up close, the scientific community—with its muddled peer-review process, badly written journal articles, subtly contemptuous letters to the editor, overtly contemptuous subreddit threads, and pompous pronouncements of the academy— looks like a rickety vehicle for getting to truth. Yet the hive mind swarms ever forward. It now advances knowledge in almost every realm of existence—even the humanities, where neuroscience and computerization are shaping understanding of everything from free will to how art and literature have evolved over time.
The mistake...is to believe that the educational credentials you get... give you any special authority on truth. What you have gained is far more important: an understanding of what real truth-seeking looks like. It is the effort not of a single person but of a group of people—the bigger the better—pursuing ideas with curiosity, inquisitiveness, openness, and discipline. As scientists, in other words.
Even more than what you think, how you think matters. The stakes for understanding this could not be higher than they are today, because we are not just battling for what it means to be scientists. We are battling for what it means to be citizens.
Wednesday, June 15, 2016
Our (Bare) Shelves, Our Selves
As is the case with many people moving through their 70s, I am having to downsize my surroundings. An 1861 stone schoolhouse converted to a residence, my Madison, WI home for the past 26 years, is going on the market next week as my husband and I contract into a smaller condo near the university for the 4-5 summer months we spend away from Fort Lauderdale. Old record, tape, CD, and book collections that have been a part of my extended ego are being discarded or massively downsized. It feels like a series of amputations, even though for years all my reading and music listening have not required any of these objects. Rather, books are downloaded (Amazon Kindle, iPad) and music is streamed from the internet (Apple Music, Pandora, Google Play, etc.). My valued music CDs have been transferred to iTunes. The visual richness and emotions evoked by the history of my filled book shelves are hardly matched by the two devices that can now perform their functions, an iPad and a wireless speaker.
This feeling of loss is why a recent Op-Ed piece by Teddy Wayne with the title of this post resonated with me. The transition I am describing is occurring in the homes of children growing up with parents who have moved from books and CDs to Kindle and streaming. In such settings there are fewer random walks through a book, record, or CD collection that turn up novel material; you look more for what you think you want. The final paragraphs of Wayne's essay:
Poking through physical artifacts, as I did with those Beatles records, is archival and curatorial; it forces you to examine each object slowly, perhaps sample it and come across a serendipitous discovery.
Scrolling through file names on a device, on the other hand, is what we do all day long, often mindlessly, in our quest to find whatever it is we’re already looking for as rapidly as possible. To see “The Beatles” in a list of hundreds of artists in an iTunes database is not nearly as arresting as holding the album cover for “Sgt. Pepper’s Lonely Hearts Club Band.”
Consider the difference between listening to music digitally versus on a record player or CD. On the former, you’re more likely to download or stream only the singles you want to hear from an album. The latter requires enough of an investment — of acquiring it, but also of energy in playing it — that you stand a better chance of committing and listening to the entire album.
If I’d merely clicked on the first MP3 track of “Sgt. Pepper’s” rather than removed the record from its sleeve, placed it in the phonograph and carefully set the needle over it, I may have become distracted and clicked elsewhere long before the B-side “Lovely Rita” played.
And what of sentiment? Jeff Bezos himself would have a hard time defending the nostalgic capacity of a Kindle .azw file over that of a tattered paperback. Data files can’t replicate the lived-in feel of a piece of beloved art. To a child, a parent’s dog-eared book is a sign of a mind at work and of the personal significance of that volume.
A crisp JPEG of the cover design on a virtual shelf, however, looks the same whether it’s been reread 10 times or not at all. If, that is, it’s ever even seen.
Tuesday, June 14, 2016
Vision reconstructs causal history from static shapes.
From Chen and Scholl:
The perception of shape, it has been argued, also often entails the perception of time. A cookie missing a bite, for example, is seen as a whole cookie that was subsequently bitten. It has never been clear, however, whether such observations truly reflect visual processing. To explore this possibility, we tested whether the perception of history in static shapes could actually induce illusory motion perception. Observers watched a square change to a truncated form, with a “piece” of it missing, and they reported whether this change was sudden or gradual. When the contours of the missing piece suggested a type of historical “intrusion” (as when one pokes a finger into a lump of clay), observers actually saw that intrusion occur: The change appeared to be gradual even when it was actually sudden, in a type of transformational apparent motion. This provides striking phenomenological evidence that vision involves reconstructing causal history from static shapes.
Monday, June 13, 2016
A postdictive illusion of choice.
Bear and Bloom report a simple experiment showing that we can feel as if we made a choice before the time at which that choice was actually made.
Do people know when, or whether, they have made a conscious choice? Here, we explore the possibility that choices can seem to occur before they are actually made. In two studies, participants were asked to quickly choose from a set of options before a randomly selected option was made salient. Even when they believed that they had made their decision prior to this event, participants were significantly more likely than chance to report choosing the salient option when this option was made salient soon after the perceived time of choice. Thus, without participants’ awareness, a seemingly later event influenced choices that were experienced as occurring at an earlier time. These findings suggest that, like certain low-level perceptual experiences, the experience of choice is susceptible to “postdictive” influence and that people may systematically overestimate the role that consciousness plays in their chosen behavior.
From their text:
In the first experiment participants viewed five white circles that appeared in random positions on a computer screen and were asked to try to quickly choose one of these circles “in their head” before one of the circles turned red. After a circle turned red, participants indicated whether they had chosen the red circle, had chosen a circle that did not turn red, or had not had enough time to choose a circle before one of them turned red.
Because the red circle is selected randomly on all trials, people performing this task should choose the red circle on approximately 20% of the trials in which they claim to have had time to make a choice if they are, in fact, making their choices before a circle turns red (and they are not biased to report choosing the red circle for some other reason). In contrast, a postdictive model predicts that people could consciously experience having made a choice before a circle turned red even though this choice did, in fact, occur after a circle turned red and was influenced by that event. Specifically, this could happen if a circle turns red soon enough to bias a person’s choice unconsciously (e.g., by subliminally capturing visual attention...), but this person completes the choice before becoming conscious of the circle’s turning red. On the other hand, if there is a relatively long delay until a circle turns red, a person would be more likely to have finished making a choice before even unconsciously processing a circle’s turning red; hence, this event would be less likely to bias the choice.
In a second experiment, we explored whether postdiction could occur in a slightly different paradigm, in which participants chose one of two different-colored circles. We used two, rather than five, choice options in this experiment to control for a worry that the time-dependent bias we observed in Experiment 1 could have been driven by low-confidence responding. If participants were less confident in choices they made more quickly, they might have been prone to choose relatively randomly between the “y” and “n” response options in short-delay trials. Such a random pattern of responding would have biased participants’ reports of choosing the red circle toward .50 (because there were only two response options), and would have resulted in greater-than-chance reports of choosing the red circle for these shorter delays (because chance was .20 in Experiment 1). By making chance .50 in this experiment, we eliminated any concern that random responding could yield the time-dependent pattern of bias that we observed in Experiment 1.
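To make the postdictive account concrete, here is a toy Python simulation of Experiment 1's logic; it is my own sketch with invented timing and bias parameters, not the authors' model or data. Pure prediction would give 20 percent red-circle choices at every delay, while postdiction inflates the rate at short delays.

```python
import random

# Toy simulation of the postdictive-choice account of Experiment 1.
# Five circles; one turns red at a given delay. If the red onset beats the
# subject's (noisy) internal choice-completion time, the choice is biased
# toward the red circle with probability BIAS, while the subject still
# experiences the choice as having come first. Timing numbers are invented.

N_CIRCLES = 5
BIAS = 0.6          # chance an early red onset captures the choice
rng = random.Random(42)

def trial(red_delay_ms):
    choice_time = rng.gauss(400, 100)        # when the internal choice lands
    if red_delay_ms < choice_time and rng.random() < BIAS:
        return True                          # unconsciously captured by red
    return rng.randrange(N_CIRCLES) == 0     # otherwise 1-in-5 by chance

for delay in (250, 600, 1000):
    hits = sum(trial(delay) for _ in range(20000))
    print(f"red onset at {delay} ms: reported choosing red on "
          f"{100 * hits / 20000:.1f}% of trials (pure prediction: 20%)")
# Short delays land well above 20%; long delays converge back toward 20%.
```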
Friday, June 10, 2016
Reducing stress-induced inflammatory disease with bacteria.
Interesting work from Reber et al.:
Significance
The hygiene, or “old friends,” hypothesis proposes that lack of exposure to immunoregulatory microorganisms in modern urban societies is resulting in an epidemic of inflammatory disease, as well as psychiatric disorders in which chronic, low-level inflammation is a risk factor. An important determinant of immunoregulation is the microbial community occupying the host organism, collectively referred to as the microbiota. Here we show that stress disrupts the homeostatic relationship between the microbiota and the host, resulting in exaggerated inflammation. Treatment of mice with a heat-killed preparation of an immunoregulatory environmental microorganism, Mycobacterium vaccae, prevents stress-induced pathology. These data support a strategy of “reintroducing” humans to their old friends to promote optimal health and wellness.
Abstract
The prevalence of inflammatory diseases is increasing in modern urban societies. Inflammation increases risk of stress-related pathology; consequently, immunoregulatory or antiinflammatory approaches may protect against negative stress-related outcomes. We show that stress disrupts the homeostatic relationship between the microbiota and the host, resulting in exaggerated inflammation. Repeated immunization with a heat-killed preparation of Mycobacterium vaccae, an immunoregulatory environmental microorganism, reduced subordinate, flight, and avoiding behavioral responses to a dominant aggressor in a murine model of chronic psychosocial stress when tested 1–2 wk following the final immunization. Furthermore, immunization with M. vaccae prevented stress-induced spontaneous colitis and, in stressed mice, induced anxiolytic or fear-reducing effects as measured on the elevated plus-maze, despite stress-induced gut microbiota changes characteristic of gut infection and colitis. Immunization with M. vaccae also prevented stress-induced aggravation of colitis in a model of inflammatory bowel disease. Depletion of regulatory T cells negated protective effects of immunization with M. vaccae on stress-induced colitis and anxiety-like or fear behaviors. These data provide a framework for developing microbiome- and immunoregulation-based strategies for prevention of stress-related pathologies.
Thursday, June 09, 2016
Unethical amnesia
From Kouchaki and Gino:
Despite our optimistic belief that we would behave honestly when facing the temptation to act unethically, we often cross ethical boundaries. This paper explores one possibility of why people engage in unethical behavior over time by suggesting that their memory for their past unethical actions is impaired. We propose that, after engaging in unethical behavior, individuals’ memories of their actions become more obfuscated over time because of the psychological distress and discomfort such misdeeds cause. In nine studies (n = 2,109), we show that engaging in unethical behavior produces changes in memory so that memories of unethical actions gradually become less clear and vivid than memories of ethical actions or other types of actions that are either positive or negative in valence. We term this memory obfuscation of one’s unethical acts over time “unethical amnesia.” Because of unethical amnesia, people are more likely to act dishonestly repeatedly over time.
Wednesday, June 08, 2016
The attention economy
I pass on some clips from an essay by Tom Chatfield:
How many other things are you doing right now while you’re reading this piece? Are you also checking your email, glancing at your Twitter feed, and updating your Facebook page? What five years ago David Foster Wallace labelled ‘Total Noise’ — ‘the seething static of every particular thing and experience, and one’s total freedom of infinite choice about what to choose to attend to’ — is today just part of the texture of living on a planet that will, by next year, boast one mobile phone for each of its seven billion inhabitants. We are all amateur attention economists, hoarding and bartering our moments…
Much as corporations incrementally improve the taste, texture and sheer enticement of food and drink by measuring how hard it is to stop eating and drinking them, the actions of every individual online are fed back into measures where more inexorably means better: more readers, more viewers, more exposure, more influence, more ads, more opportunities to unfurl the integrated apparatus of gathering and selling data. Attention, thus conceived, is an inert and finite resource, like oil or gold: a tradable asset that the wise manipulator auctions off to the highest bidder, or speculates upon to lucrative effect. There has even been talk of the world reaching ‘peak attention’, by analogy to peak oil production, meaning the moment at which there is no more spare attention left to spend.
There’s a reductive exaltation in defining attention as the contents of a global reservoir, slopping interchangeably between the brains of every human being alive. Where is the space, here, for the idea of attention as a mutual construction more akin to empathy than budgetary expenditure — or for those unregistered moments in which we attend to ourselves, to the space around us, or to nothing at all?
From the loftiest perspective of all, information itself is pulling the strings: free-ranging memes whose ‘purposes’ are pure self-propagation, and whose frantic evolution outstrips all retrospective accounts…consider yourself as interchangeable as the button you’re clicking, as automated as the systems in which you’re implicated. Seen from such a height, you signify nothing beyond your recorded actions…in making our attentiveness a fungible asset, we’re not so much conjuring currency out of thin air as chronically undervaluing our time.
We watch a 30-second ad in exchange for a video; we solicit a friend’s endorsement; we freely pour sentence after sentence, hour after hour, into status updates and stock responses. None of this depletes our bank balances. Yet its cumulative cost, while hard to quantify, affects many of those things we hope to put at the heart of a happy life: rich relationships, rewarding leisure, meaningful work, peace of mind.
What kind of attention do we deserve from those around us, or owe to them in return? What kind of attention do we ourselves deserve, or need, if we are to be ‘us’ in the fullest possible sense? These aren’t questions that even the most finely tuned popularity contest can resolve. Yet, if contentment and a sense of control are partial measures of success, many of us are selling ourselves far too cheap.
Tuesday, June 07, 2016
A redefinition of health and well-being for older adults
McClintock et al. take a more comprehensive approach to defining health and find some interesting new categories. The healthiest people are obese and robust!
Significance
Health has long been conceived as not just the absence of disease but also the presence of physical, psychological, and social well-being. Nonetheless, the traditional medical model focuses on specific organ system diseases. This representative study of US older adults living in their homes amassed not only comprehensive medical information but also psychological and social data and measured sensory function and mobility, all key factors for independent living and a gratifying life. This comprehensive model revealed six unique health classes, predicting mortality/incapacity. The healthiest people were obese and robust; two new classes, with twice the mortality/incapacity, were people with healed broken bones or poor mental health. This approach provides an empirical method for broadly reconceptualizing health, which may inform health policy.
Abstract
The World Health Organization (WHO) defines health as a “state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” Despite general acceptance of this comprehensive definition, there has been little rigorous scientific attempt to use it to measure and assess population health. Instead, the dominant model of health is a disease-centered Medical Model (MM), which actively ignores many relevant domains. In contrast to the MM, we approach this issue through a Comprehensive Model (CM) of health consistent with the WHO definition, giving statistically equal consideration to multiple health domains, including medical, physical, psychological, functional, and sensory measures. We apply a data-driven latent class analysis (LCA) to model 54 specific health variables from the National Social Life, Health, and Aging Project (NSHAP), a nationally representative sample of US community-dwelling older adults. We first apply the LCA to the MM, identifying five health classes differentiated primarily by having diabetes and hypertension. The CM identifies a broader range of six health classes, including two “emergent” classes completely obscured by the MM. We find that specific medical diagnoses (cancer and hypertension) and health behaviors (smoking) are far less important than mental health (loneliness), sensory function (hearing), mobility, and bone fractures in defining vulnerable health classes. Although the MM places two-thirds of the US population into “robust health” classes, the CM reveals that one-half belong to less healthy classes, independently associated with higher mortality. This reconceptualization has important implications for medical care delivery, preventive health practices, and resource allocation.
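For the curious, latent class analysis is a finite mixture model: each person belongs to one of K unobserved classes, and each class carries its own probability for every (here, binarized) health indicator, with the model fit by expectation-maximization. The bare-bones Python sketch below illustrates the method on simulated data; it is not the NSHAP analysis, and all the numbers are invented.

```python
import numpy as np

# Bare-bones latent class analysis (LCA) for binary indicators, fit by EM.
# Each of K latent classes has its own Bernoulli probability per item;
# class membership is unobserved. Illustrative only (random data), not the
# NSHAP analysis described above.

rng = np.random.default_rng(0)

def fit_lca(X, n_classes, n_iter=200):
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)         # class weights
    theta = rng.uniform(0.25, 0.75, (n_classes, d))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: unnormalized log P(class k | person i) from Bernoulli terms
        log_lik = (X @ np.log(theta).T
                   + (1 - X) @ np.log(1 - theta).T + np.log(pi))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)      # responsibilities
        # M-step: update class weights and per-class item probabilities
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)
    return pi, theta, resp

# Simulate two true classes ("robust" vs "vulnerable") over 6 binary items
true_theta = np.array([[0.9, 0.8, 0.1, 0.1, 0.2, 0.9],
                       [0.2, 0.3, 0.8, 0.9, 0.7, 0.2]])
labels = rng.integers(0, 2, size=1000)
X = (rng.random((1000, 6)) < true_theta[labels]).astype(float)

pi, theta, _ = fit_lca(X, n_classes=2)
print("class weights:", pi.round(2))                 # class order is arbitrary
print("recovered item probabilities:\n", theta.round(2))
```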
Monday, June 06, 2016
Why do we feel awe?
I want to point to an article by Dacher Keltner on the functions of awe that appeared on the Slate website, along with others sponsored by the John Templeton Foundation. Here are clips describing a few of the experiments he mentions.
A new science is now asking “Why awe?” This is a question we can approach in two ways. First we can consider the long, evolutionary view: Why did awe become part of our species’ emotional repertoire during seven million years of hominid evolution? A preliminary answer is that awe binds us to social collectives and enables us to act in more collaborative ways that enable strong groups, thus improving our odds for survival.
For example, in one study from our Berkeley lab, my colleague Michelle Shiota had participants fill in the blank of the following phrase: “I AM ____.” They did so 20 times, either while standing before an awe-inspiring replica of a T. rex skeleton in UC Berkeley’s Museum of Paleontology or in the exact same place but oriented to look down a hallway, away from the T. rex. Those looking at the dinosaur were more likely to define their individual selves in collectivist terms—as a member of a culture, a species, a university, a moral cause. Awe embeds the individual self in a social identity.
Near Berkeley’s Museum of Paleontology stands a grove of eucalyptus trees, the tallest in North America. When you gaze up at these trees, with their peeling bark and surrounding nimbus of grayish green light, goosebumps may ripple down your neck, a sure sign of awe...my colleague Paul Piff staged a minor accident near that grove to see if awe would prompt greater kindness...Participants first either looked up into the tall trees for one minute—long enough for them to report being filled with awe—or oriented 90 degrees away to look up at the facade of a large science building. They then encountered a person who stumbled, dropping a handful of pens into the dirt. Sure enough, the participants who had been gazing up at the awe-inspiring trees picked up more pens. Experiencing awe seemed to make them more inclined to help someone in need. They also reported feeling less entitled and self-important than the other study participants did.