Wednesday, November 11, 2015

Trusting robots, but not androids

Gilbert Chin points to work by Mathur and Reichling in the journal Cognition.
Robots collect warehoused books, weld car parts together, and vacuum floors. As the number of android robots increases, however, concerns about the “uncanny valley” phenomenon—that people dislike a vaguely human-like robot more than either a machine-like robot or a real human—remain. Mathur and Reichling revisited whether human reactions to android robots exhibit an uncanny valley effect, using a set of 80 robot head shots gathered from the Internet and a systematically morphed set of six images extending from entirely robot to entirely human. Humans did adhere to the uncanny valley curve when rating the likeability of both sets of faces; what's more, this curve also described the extent to which those faces were trusted.
Here's the summary from the paper:

Highlights
• Likability ratings of a large sample of real robot faces had a robust Uncanny Valley.
• Digitally composed robot face series demonstrated a similar Uncanny Valley.
• The Uncanny Valley may subtly alter humans’ trusting behavior toward robot partners.
• Category confusion may occur in the Uncanny Valley but did not mediate the effect. 

Abstract
Android robots are entering human social life. However, human–robot interactions may be complicated by a hypothetical Uncanny Valley (UV) in which imperfect human-likeness provokes dislike. Previous investigations using unnaturally blended images reported inconsistent UV effects. We demonstrate an UV in subjects’ explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces. An “investment game” showed that the UV penetrated even more deeply to influence subjects’ implicit decisions concerning robots’ social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans. Preliminary evidence suggests category confusion may occur in the UV but does not mediate the likability effect. These findings suggest that while classic elements of human social psychology govern human–robot social interaction, robust UV effects pose a formidable android-specific problem.

Tuesday, November 10, 2015

The unknowns of cognitive enhancement

Martha Farah points out how little is known about current methods of cognitive enhancement, and suggests several reasons why we are so ignorant. A few clips from her article:
...stimulants such as amphetamine and methylphenidate (sold under trade names such as Adderall and Ritalin, respectively) are widely used for nonmedical reasons …cognitive enhancement with stimulants is commonplace on college campuses…use by college faculty and other professionals to enhance workplace productivity has been documented…The published literature includes substantially different estimates of the effectiveness of prescription stimulants as cognitive enhancers. A recent meta-analysis suggests that the effect is most likely real but small for executive function tests stressing inhibitory control, and probably nonexistent for executive function tests stressing working memory.
Farah notes several studies suggesting that the effects of Adderall and another drug, modafinil (trade name Provigil), on ‘cognitive enhancement’ are actually effects on task motivation and mood.
The newest trend in cognitive enhancement is the use of transcranial electric stimulation. In the most widely used form, called transcranial direct current stimulation (tDCS), a weak current flows between an anode and a cathode placed on the head, altering the resting potential of neurons in the current's path….Transcranial electric stimulation is expanding …with new companies selling compact, visually appealing, user-friendly devices…published literature includes a mix of findings. One recent attempt to synthesize the literature with meta-analysis concluded that tDCS has no effect whatsoever on a wide range of cognitive abilities.
Why are we so ignorant about cognitive enhancement? Several factors seem to be at play. The majority of studies on enhancement effectiveness have been carried out on small samples, rarely more than 50 subjects, which limits their power. Furthermore, cognitive tasks typically lend themselves to a variety of different but reasonable outcome measures, such as overall errors, specific types of errors (for example, false alarms), and response times. In addition, there is usually more than one possible statistical approach to analyze the enhancement effect. Small samples and flexibility in design and analysis raise the likelihood of published false positives. In addition, pharmacologic and electric enhancements may differ in effectiveness depending on the biological and psychological traits of the user, which complicates the effort to understand the true enhancement potential of these technologies. Industry is understandably unmotivated to take on the expense of appropriate large-scale trials of enhancement, given that the stimulants used are illegally diverted and transcranial electric stimulation devices can be sold without such evidence. The inferential step from laboratory effect to real-world benefit adds another layer of challenge. Given that enhancements would likely be used for years, long-term effectiveness and safety are essential concerns but are particularly difficult and costly to determine. As a result, the only large-scale trial we may see is the enormous but uncontrolled and poorly monitored trial of people using these drugs and devices on their own.

Monday, November 09, 2015

Can we really change our aging?

I thought I would point MindBlog readers to a brief talk I gave, "Can we really change our aging?," at the Nov. 1, 2015 meeting of the Fort Lauderdale Prime Timers, and a Nov. 7 Lunch and Learn session of SAGE South Florida. It distills the contents of about 250 MindBlog posts I’ve written describing research on aging, and passes on some of the facts I think are most striking.

Friday, November 06, 2015

Critical period for visual pathway formation? - another dogma bites the dust.

India, which may have the largest number of blind children in the world, with estimates ranging from 360,000 to nearly 1.2 million, is providing a vast laboratory that has overturned one of the central dogmas of brain development - that development of visual (and other) pathways must take place within a critical time window, after which formation of proper connections becomes much more difficult or impossible. Until recently, children over 8 years old with congenital cataracts were not considered appropriate subjects for lens replacement surgery. In Science Magazine, Rhitu Chatterjee describes a project begun in 2004, led by neuroscientist Pawan Sinha, that has restored sight to much older children. The article follows the story of one 18-year-old who, over the 18 months following lens replacement, began to see with enough clarity to bike through a crowded marketplace.

Of the nearly 500 children and young adults who have undergone cataract operations, about half became research subjects. One fascinating result that emerged is that visual experience isn't critical for certain visual functions: the brain seems to be prewired, for example, to be fooled by some visual illusions that were thought to be a product of learning. One is the Ponzo illusion, which typically involves lines converging on the horizon (like train tracks) and two short parallel lines cutting across them. Although the horizontal lines are identical, the one nearer the horizon looks longer. If the Ponzo illusion were the result of visual learning, newly sighted kids wouldn't fall for it. But in fact, children who had just had their vision restored were just as susceptible to the Ponzo illusion as were control subjects with normal vision. The kids also fell for the Müller-Lyer illusion, a pair of lines with arrowheads on both ends; one set of arrowheads points outward, the other inward toward the line. The line with the inward arrowheads seems longer. These results lead Sinha to suggest that the illusions are driven by very simple factors in the image that the brain is probably innately programmed to respond to.

Thursday, November 05, 2015

A biomarker for early detection of dementia

Kunz et al. show that, in a group at genetic risk for developing Alzheimer's disease, altered brain signals can be detected decades before potential onset of the disease. Individuals showing this change would be candidates for starting therapy at early stages of the disease.
Alzheimer’s disease (AD) manifests with memory loss and spatial disorientation. AD pathology starts in the entorhinal cortex, making it likely that local neural correlates of spatial navigation, particularly grid cells, are impaired. Grid-cell–like representations in humans can be measured using functional magnetic resonance imaging. We found that young adults at genetic risk for AD (APOE-ε4 carriers) exhibit reduced grid-cell–like representations and altered navigational behavior in a virtual arena. Both changes were associated with impaired spatial memory performance. Reduced grid-cell–like representations were also related to increased hippocampal activity, potentially reflecting compensatory mechanisms that prevent overt spatial memory impairment in APOE-ε4 carriers. Our results provide evidence of behaviorally relevant entorhinal dysfunction in humans at genetic risk for AD, decades before potential disease onset.

Wednesday, November 04, 2015

Lifting weights and the brain.

Reynolds points to a study suggesting that light weight training slows down the shrinkage and tattering of our brain's white matter (nerve tracts) that normally occurs with aging.


Tuesday, November 03, 2015

Brain Pickings on 'the most important things'

I enjoy the weekly email sent out by Maria Popova's Brain Pickings website. I find it a bit overwhelming (and high on the estrogens), and so sample only a few of the idea chunks it presents. I suggest you have a look. On its 9th birthday, Brain Pickings noted the "9 most important things I have learned":
1. Allow yourself the uncomfortable luxury of changing your mind.
2. Do nothing for prestige or status or money or approval alone.
3. Be generous.
4. Build pockets of stillness into your life.
5. When people try to tell you who you are, don’t believe them.
6. Presence is far more intricate and rewarding an art than productivity.
7. Expect anything worthwhile to take a long time.
8. Seek out what magnifies your spirit.
9. Don’t be afraid to be an idealist.

Monday, November 02, 2015

A lab experiment: visibility of wealth increases wealth inequality

Nishi et al. report a fascinating laboratory experiment, conducted online, showing that when people can see wealth inequality in their social network, this propels further inequality through reduced cooperation and reduced social connectivity. From a summary by Gächter:
Nishi and colleagues' experimental model used an assessment of people's willingness to contribute to public goods to test how initial wealth inequality and the structure of the social network influence the evolution of inequality...can mere observation of your neighbour's wealth lead to more inequality over time, even if such information does not change economic incentives? Visible wealth might have a psychological effect by triggering social comparisons and thereby influencing economic choices that have repercussions for inequality.
...the researchers endowed all participants with tokens, worth real money...in a treatment without inequality, all participants initially received the same number of tokens; in a low-inequality treatment, participants had similar but different initial endowments; and in the high-inequality treatment there was a substantial starting difference between participants...A crucial manipulation in this experiment was wealth visibility. Under invisible conditions, the participants could observe only their own accumulated wealth. Under visibility, they could see the accumulated wealth of their connected neighbours but not the whole network....
The groups typically comprised 17 people arranged at random in a social network in which, on average, about 5 people were linked ('neighbours'). In each of the 10 rounds of the following game, participants had to decide whether to behave pro-socially ('cooperate') by reducing their own wealth by 50 tokens per connected neighbour to benefit each of them by 100 tokens, or to behave pro-selfishly ('defect') by keeping their tokens for themselves. These decisions had consequences for accumulated wealth levels and inequality. At the end of each round, the subjects learnt whether their neighbours had cooperated or defected and 30% of participants were given the opportunity to change their neighbour, that is, to either sever an existing link or to create a new one.
The authors find that, under high initial wealth inequality, visibility of neighbours' accumulated wealth increases inequality over time relative to the invisibility condition.
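To make the incentive structure of the game concrete, here is a minimal sketch of a single round's payoffs under the rules described above (my own illustrative Python, using the token amounts quoted in the summary, not the authors' code):

def round_payoffs(decisions, neighbors):
    # decisions: dict mapping player -> 'C' (cooperate) or 'D' (defect)
    # neighbors: dict mapping player -> list of connected players
    # A cooperator pays 50 tokens per neighbor; each of those neighbors gains 100 tokens.
    change = {player: 0 for player in decisions}
    for player, choice in decisions.items():
        if choice == 'C':
            change[player] -= 50 * len(neighbors[player])
            for other in neighbors[player]:
                change[other] += 100
    return change

# Three players connected in a line: A - B - C
neighbors = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
decisions = {'A': 'C', 'B': 'D', 'C': 'C'}
print(round_payoffs(decisions, neighbors))
# {'A': -50, 'B': 200, 'C': -50} - the defector in the middle profits most this round

Repeated over ten rounds, with some players allowed to rewire their links after each one, small asymmetries like this can compound into the inequality dynamics the authors report.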
Here is the abstract from Nishi et al.:
Humans prefer relatively equal distributions of resources, yet societies have varying degrees of economic inequality. To investigate some of the possible determinants and consequences of inequality, here we perform experiments involving a networked public goods game in which subjects interact and gain or lose wealth. Subjects (n = 1,462) were randomly assigned to have higher or lower initial endowments, and were embedded within social networks with three levels of economic inequality (Gini coefficient = 0.0, 0.2, and 0.4). In addition, we manipulated the visibility of the wealth of network neighbours. We show that wealth visibility facilitates the downstream consequences of initial inequality—in initially more unequal situations, wealth visibility leads to greater inequality than when wealth is invisible. This result reflects a heterogeneous response to visibility in richer versus poorer subjects. We also find that making wealth visible has adverse welfare consequences, yielding lower levels of overall cooperation, inter-connectedness, and wealth. High initial levels of economic inequality alone, however, have relatively few deleterious welfare effects.

Friday, October 30, 2015

More exercise correlates with younger body cells.

Reynolds points to work by Loprinzi et al. showing that physically active people have longer telomeres at the ends of their chromosomes' DNA strands than sedentary people. (A telomere is a region of repetitive nucleotide sequences at each end of a chromatid, which protects the end of the chromosome from deterioration and from fusion with neighboring chromosomes. Its length is a measure of a cell's biological age because it naturally shortens and frays with age.) Here is their abstract, complete with three (unnecessary) abbreviations, LTL (leukocyte telomere length), PA (physical activity) and MBB (movement-based behaviors), that you will have to keep in your short-term memory for a few seconds:

INTRODUCTION: Short leukocyte telomere length (LTL) has become a hallmark characteristic of aging. Some, but not all, evidence suggests that physical activity (PA) may play an important role in attenuating age-related diseases and may provide a protective effect for telomeres. The purpose of this study was to examine the association between PA and LTL in a national sample of US adults from the National Health and Nutrition Examination Survey.  
METHODS: National Health and Nutrition Examination Survey data from 1999 to 2002 (n = 6503; 20-84 yr) were used. Four self-report questions related to movement-based behaviors (MBB) were assessed. The four MBB included whether individuals participated in moderate-intensity PA, vigorous-intensity PA, walking/cycling for transportation, and muscle-strengthening activities. An MBB index variable was created by summing the number of MBB an individual engaged in (range, 0-4).  
RESULTS: A clear dose-response relation was observed between MBB and LTL; across the LTL tertiles, respectively, the mean numbers of MBB were 1.18, 1.44, and 1.54 (Ptrend less than 0.001). After adjustments (including age) and compared with those engaging in 0 MBB, those engaging in 1, 2, 3, and 4 MBB, respectively, had a 3% (P = 0.84), 24% (P = 0.02), 29% (P = 0.04), and 52% (P = 0.004) reduced odds of being in the lowest (vs highest) tertile of LTL; MBB was not associated with being in the middle (vs highest) tertile of LTL.  
CONCLUSIONS: Greater engagement in MBB was associated with reduced odds of being in the lowest LTL tertile.

Thursday, October 29, 2015

Low-power people are more trusting in social exchange.

Schilke et al. make observations suggesting that low-power individuals want the high-power people they interact with to be trustworthy, and act according to that desire:
How does lacking vs. possessing power in a social exchange affect people’s trust in their exchange partner? An answer to this question has broad implications for a number of exchange settings in which dependence plays an important role. Here, we report on a series of experiments in which we manipulated participants’ power position in terms of structural dependence and observed their trust perceptions and behaviors. Over a variety of different experimental paradigms and measures, we find that more powerful actors place less trust in others than less powerful actors do. Our results contradict predictions by rational actor models, which assume that low-power individuals are able to anticipate that a more powerful exchange partner will place little value on the relationship with them, thus tends to behave opportunistically, and consequently cannot be trusted. Conversely, our results support predictions by motivated cognition theory, which posits that low-power individuals want their exchange partner to be trustworthy and then act according to that desire. Mediation analyses show that, consistent with the motivated cognition account, having low power increases individuals’ hope and, in turn, their perceptions of their exchange partners’ benevolence, which ultimately leads them to trust.

Wednesday, October 28, 2015

How much sleep do we really need?

A study by Yetish et al. casts fascinating light on the widespread idea that a large fraction of us in modern industrial societies are sleep-deprived, going to bed later than is "natural" and sleeping less than our bodies need. They monitored the sleep patterns of three hunter-gatherer cultures in Bolivia, Tanzania, and Namibia. Here is their summary:

Highlights
•Preindustrial societies in Tanzania, Namibia, and Bolivia show similar sleep parameters
•They do not sleep more than “modern” humans, with average durations of 5.7–7.1 hr
•They go to sleep several hours after sunset and typically awaken before sunrise
•Temperature appears to be a major regulator of human sleep duration and timing
Summary 
How did humans sleep before the modern era? Because the tools to measure sleep under natural conditions were developed long after the invention of the electric devices suspected of delaying and reducing sleep, we investigated sleep in three preindustrial societies. We find that all three show similar sleep organization, suggesting that they express core human sleep patterns, most likely characteristic of pre-modern era Homo sapiens. Sleep periods, the times from onset to offset, averaged 6.9–8.5 hr, with sleep durations of 5.7–7.1 hr, amounts near the low end of those industrial societies. There was a difference of nearly 1 hr between summer and winter sleep. Daily variation in sleep duration was strongly linked to time of onset, rather than offset. None of these groups began sleep near sunset, onset occurring, on average, 3.3 hr after sunset. Awakening was usually before sunrise. The sleep period consistently occurred during the nighttime period of falling environmental temperature, was not interrupted by extended periods of waking, and terminated, with vasoconstriction, near the nadir of daily ambient temperature. The daily cycle of temperature change, largely eliminated from modern sleep environments, may be a potent natural regulator of sleep. Light exposure was maximal in the morning and greatly decreased at noon, indicating that all three groups seek shade at midday and that light activation of the suprachiasmatic nucleus is maximal in the morning. Napping occurred on fewer than 7% of days in winter and fewer than 22% of days in summer. Mimicking aspects of the natural environment might be effective in treating certain modern sleep disorders.

Tuesday, October 27, 2015

Chilling down our religiosity and intolerance with some magnets?

A group of collaborators has used transcranial magnetic stimulation (TMS) to dial down activity in the area of the posterior medial frontal cortex (pMFC) that evaluates threats and plans responses. A group of subjects who had undergone this procedure expressed less bias against immigrants and also less belief in God than a group that received a sham TMS treatment.
People cleave to ideological convictions with greater intensity in the aftermath of threat. The posterior medial frontal cortex (pMFC) plays a key role in both detecting discrepancies between desired and current conditions and adjusting subsequent behavior to resolve such conflicts. Building on prior literature examining the role of the pMFC in shifts in relatively low-level decision processes, we demonstrate that the pMFC mediates adjustments in adherence to political and religious ideologies. We presented participants with a reminder of death and a critique of their in-group ostensibly written by a member of an out-group, then experimentally decreased both avowed belief in God and out-group derogation by downregulating pMFC activity via transcranial magnetic stimulation. The results provide the first evidence that group prejudice and religious belief are susceptible to targeted neuromodulation, and point to a shared cognitive mechanism underlying concrete and abstract decision processes. We discuss the implications of these findings for further research characterizing the cognitive and affective mechanisms at play.

Monday, October 26, 2015

The hippocampus is essential for recall but not for recognition.

From Patai et al.:
Which specific memory functions are dependent on the hippocampus is still debated. The availability of a large cohort of patients who had sustained relatively selective hippocampal damage early in life enabled us to determine which type of mnemonic deficit showed a correlation with extent of hippocampal injury. We assessed our patient cohort on a test that provides measures of recognition and recall that are equated for difficulty and found that the patients' performance on the recall tests correlated significantly with their hippocampal volumes, whereas their performance on the equally difficult recognition tests did not and, indeed, was largely unaffected regardless of extent of hippocampal atrophy. The results provide new evidence in favor of the view that the hippocampus is essential for recall but not for recognition.

Friday, October 23, 2015

Brain activity associated with predicting rewards to others.

Lockwood et al. make the interesting observation that a subregion of the anterior cingulate cortex shows specialization for processing others' versus one's own rewards.


Empathy—the capacity to understand and resonate with the experiences of others—can depend on the ability to predict when others are likely to receive rewards. However, although a plethora of research has examined the neural basis of predictions about the likelihood of receiving rewards ourselves, very little is known about the mechanisms that underpin variability in vicarious reward prediction. Human neuroimaging and nonhuman primate studies suggest that a subregion of the anterior cingulate cortex in the gyrus (ACCg) is engaged when others receive rewards. Does the ACCg show specialization for processing predictions about others' rewards and not one's own and does this specialization vary with empathic abilities? We examined hemodynamic responses in the human brain time-locked to cues that were predictive of a high or low probability of a reward either for the subject themselves or another person. We found that the ACCg robustly signaled the likelihood of a reward being delivered to another. In addition, ACCg response significantly covaried with trait emotion contagion, a necessary foundation for empathizing with other individuals. In individuals high in emotion contagion, the ACCg was specialized for processing others' rewards exclusively, but for those low in emotion contagion, this region also responded to information about the subject's own rewards. Our results are the first to show that the ACCg signals probabilistic predictions about rewards for other people and that the substantial individual variability in the degree to which the ACCg is specialized for processing others' rewards is related to trait empathy.

Thursday, October 22, 2015

Drugs or therapy for depression?

I want to pass on a few clips from a piece by Friedman, summarizing work by Mayberg and collaborators at Emory University, who looked for brain activity that might predict whether a depressed patient would respond better to psychotherapy or antidepressant medication:
Using PET scans, she randomized a group of depressed patients to either 12 weeks of treatment with the S.S.R.I. antidepressant Lexapro or to cognitive behavior therapy, which teaches patients to correct their negative and distorted thinking.
Over all, about 40 percent of the depressed subjects responded to either treatment. But Dr. Mayberg found striking brain differences between patients who did well with Lexapro compared with cognitive behavior therapy, and vice versa. Patients who had low activity in a brain region called the anterior insula measured before treatment responded quite well to C.B.T. but poorly to Lexapro; conversely, those with high activity in this region had an excellent response to Lexapro, but did poorly with C.B.T.
We know that the insula is centrally involved in the capacity for emotional self-awareness, cognitive control and decision making, all of which are impaired by depression. Perhaps cognitive behavior therapy has a more powerful effect than an antidepressant in patients with an underactive insula because it teaches patients to control their emotionally disturbing thoughts in a way that an antidepressant cannot.
These neurobiological differences may also have important implications for treatment, because for most forms of depression, there is little evidence to support one form of treatment over another...Currently, doctors typically prescribe antidepressants on a trial-and-error basis, selecting or adding one antidepressant after another when a patient fails to respond to the first treatment. Rarely does a clinician switch to an empirically proven psychotherapy like cognitive behavior therapy after a patient fails to respond to medication, although these data suggest this might be just the right strategy. One day soon, we may be able to quickly scan a patient with an M.R.I. or PET, check the brain activity “fingerprint” and select an antidepressant or psychotherapy accordingly.
Is the nonspecific nature of talk therapy — feeling understood and cared for by another human being — responsible for its therapeutic effect? Or will specific types of therapy — like C.B.T. or interpersonal or psychodynamic therapy — show distinctly different clinical and neurobiological effects for various psychiatric disorders?...Right now we don’t have a clue, in part because of the current research funding priorities of the National Institute of Mental Health, which strongly favors brain science over psychosocial treatments. But these are important questions, and we owe it to our patients to try to answer them.

Wednesday, October 21, 2015

Hoopla over a bit of rat brain…a complete brain simulation?

A vastly expensive and heavily marketed international collaboration, the “Blue Brain Project (BBP)”, has now reported its first digital reconstruction of a slice of rat somatosensory cortex, the most complete simulation of a piece of excitable brain matter to date (still, a speck of tissue compared to the human brain, which is two million times larger). I, along with a chorus of critics, cannot see how a static depiction and reconstruction of a cortical column (~30,000 neurons, ~40 million synapses) is anything but a waste of money. The biological reality is that those neurons and synapses are not just sitting there, with static components cranking away like the innards of a computer. The wiring is plastic, constantly changing as axons, dendrites, and synapses grow and retract, changing the number and kind of the connections over which information flows.

Koch and Buice make the generous point that all this might not matter if one could devise the biological equivalent of Alan Turing's Imitation Game: testing whether an observer could tell if the output observed for a given input is being generated by the simulation or by electrical recordings from living tissue. Here are some interesting clips from their article in Cell.
...the current BBP model stops with the continuous and deterministic Hodgkin-Huxley currents...And therein lies an important lesson. If the real and the synthetic can’t be distinguished at the level of firing rate activity (even though it is uncontroversial that spiking is caused by the concerted action of tens of thousands of ionic channel proteins), the molecular level of granularity would appear to be irrelevant to explain electrical activity. Teasing out which mechanisms contribute to any specific phenomena is essential to what is meant by understanding.
Markram et al. claim that their results point to the minimal datasets required to model cortex. However, we are not aware of any rigorous argument in the present triptych of manuscripts, specifying the relevant level of granularity. For instance, are active dendrites, such as those of the tall, layer 5 pyramidal cells, essential? Could they be removed without any noticeable effect? Why not replace the continuous, macroscopic, and deterministic HH equations with stochastic Markov models of thousands of tiny channel conductances? Indeed, why not consider quantum mechanical levels of descriptions? Presumably, the latter two avenues have not been chosen because of their computational burden and the intuition that they are unlikely to be relevant. The Imitation Game offers a principled way of addressing these important questions: only add a mechanism if its impact on a specific set of measurables can be assessed by a trained observer.
Consider the problem of numerical weather prediction and climate modeling, tasks whose physico-chemical and computational complexity is comparable to whole-brain modeling. Planet-wide simulations that cover timescales from hours to decades require a deep understanding of how physical systems interact across multiple scales and careful choices about the scale at which different phenomena are modeled. This has led to an impressive increase in predictive power since 1950, when the first such computer calculations were performed. Of course, a key difference between weather prediction and whole-brain simulation is that the former has a very specific and quantifiable scientific question (to wit: “is it going to rain tomorrow?”). The BBP has created an impressive initial scaffold that will facilitate asking these kinds of questions for brains.
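As an aside for readers who haven't met the Hodgkin-Huxley formalism mentioned in the first clip, the standard textbook membrane equation at that level of description (not anything specific to the BBP implementation) is

C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar{g}_L (V - E_L) + I_{\mathrm{ext}}, \qquad \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\, x \quad (x = m, h, n).

The gating variables m, h, and n are smooth, deterministic averages over the behavior of thousands of individual ion channels, which is exactly the level of granularity at issue: the stochastic Markov-channel alternative Koch and Buice mention would instead track the random opening and closing of each channel.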

Tuesday, October 20, 2015

Meditation madness

Adam Grant writes a New York Times Op-Ed piece that mirrors some of my own sentiments about the current meditation craze. There would seem to be almost nothing that practicing meditation doesn't enhance (ingrown toenails?). I'm fascinated by what studies on meditation have told us about how the mind works, and MindBlog has done many posts on the topic (click the meditation link under 'selected blog categories' in the right column). I and many others personally find it very useful in maintaining a calm and focused mind.  But.... it is not a universal panacea, and many of its effects can be accomplished, as Grant points out, by other means. (By the way, a Wisconsin colleague of mine who has assisted in a number of the meditation studies conducted by Richard Davidson and collaborators at the University of Wisconsin feels that people who engage in meditation regimes display more depressive behaviors after a period of time.) Some clips from Grant's screed:
...Every benefit of the practice can be gained through other activities...This is the conclusion from an analysis of 47 trials of meditation programs, published last year in JAMA Internal Medicine: “We found no evidence that meditation programs were better than any active treatment (i.e., drugs, exercise and other behavioral therapies).”
O.K., so meditation is just one of many ways to fight stress. But there’s another major benefit of meditating: It makes you mindful. After meditating, people are more likely to focus their attention in the present. But as the neuroscientist Richard Davidson and the psychologist Alfred Kaszniak recently lamented, “There are still very few methodologically rigorous studies that demonstrate the efficacy of mindfulness-based interventions in either the treatment of specific diseases or in the promotion of well-being.”
And guess what? You don’t need to meditate to achieve mindfulness either...you can become more mindful by thinking in conditionals instead of absolutes...Change “is” to “could be,” and you become more mindful. The same is true when you look for an answer rather than the answer.
(I would also point out that 'mindfulness' can frequently be generated by switching in your thoughts from a first- to a third-person perspective.) Finally:
...in some situations, meditation may be harmful: Willoughby Britton, a Brown University Medical School professor, has discovered numerous cases of traumatic meditation experiences that intensify anxiety, reduce focus and drive, and leave people feeling incapacitated.

Monday, October 19, 2015

A brain switch that can make the familiar seem new?

We all face the issue of how to refresh and renew our energy and perspective after our brains have adapted, habituated, or desensitized to an ongoing interest or activity that has lost its novelty. As I engage my long-term interests in piano performance and studying how our minds work, I wish I could throw a "reset" switch in my brain that would let me approach the material as if it were new again. Ho et al. appear to have found such a switch, in the perirhinal cortex of rats, that regulates whether images are perceived as familiar or novel:
Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally.
Unfortunately, given that rather fancy optogenetic methods were used to vary oscillatory activity in the perirhinal cortex, no human applications of this work are imminent.

Sunday, October 18, 2015

Sir Reginald's Marvellous Organ

Under the "random curious stuff" category noted in MindBlog's title, above, I can't resist passing on this naughty video sent by a friend...apologies to sensitive readers who only want the brain stuff.


Friday, October 16, 2015

Great apes can look ahead in time

Yet another supposed distinction between human and animal minds has bitten the dust. The prevailing dogma (expressed in my talk "The Beast Within") has been that animals don't anticipate the future. Now Kano and Hirata show that chimpanzees and bonobos remember a movie they viewed a day earlier: when the movie is shown again, their eyes move to the part of the screen where an action relevant to the storyline is about to happen.

Highlights

•We developed a novel eye-tracking task to examine great apes’ memory skills
•Apes watched the same videos twice across 2 days, with a 24-hr delay
•Apes made anticipatory looks based on where-what information on the second day
•Apes thus encoded ongoing events into long-term memory by single experiences

Summary

Everyday life poses a continuous challenge for individuals to encode ongoing events, retrieve past events, and predict impending events. Attention and eye movements reflect such online cognitive and memory processes, especially through “anticipatory looks”. Previous studies have demonstrated the ability of nonhuman animals to retrieve detailed information about single events that happened in the distant past. However, no study has tested whether nonhuman animals employ online memory processes, in which they encode ongoing movie-like events into long-term storage during single viewing experiences. Here, we developed a novel eye-tracking task to examine great apes’ anticipatory looks to the events that they had encountered one time 24 hr earlier. Half-minute movie clips depicted novel and potentially alarming situations to the participant apes (six bonobos, six chimpanzees). In the experiment 1 clip, an aggressive ape-like character came out from one of two identical doors. While viewing the same movie again, apes anticipatorily looked at the door where the character would show up. In the experiment 2 clip, the human actor grabbed one of two objects and attacked the character with it. While viewing the same movie again but with object-location switched, apes anticipatorily looked at the object that the human would use, rather than the former location of the object. Our results thus show that great apes, just by watching the events once, encoded particular information (location and content) into long-term memory and later retrieved that information at a particular time in anticipation of the impending events.