Thursday, December 03, 2015
The evolution of music from emotional signals
I want to pass on the slightly edited abstract of a recent article on the evolutionary origins of music, "Music evolution and neuroscience," in Progress in Brain Research, written by my Univ. of Wisconsin colleague Charles Snowdon:
There have been many attempts to discuss the evolutionary origins of music. We review theories of music origins and take the perspective that music is originally derived from emotional signals in both humans and animals. An evolutionary approach has two components: First, is music adaptive? How does it improve reproductive success? Second, what, if any, are the phylogenetic origins of music? Can we find evidence of music in other species? We show that music has adaptive value through emotional contagion, social cohesion, and improved well-being. We trace the roots of music through the emotional signals of other species suggesting that the emotional aspects of music have a long evolutionary history. We show how music and speech are closely interlinked with the musical aspects of speech conveying emotional information. We describe acoustic structures that communicate emotion in music and present evidence that these emotional features are widespread among humans and also function to induce emotions in animals. Similar acoustic structures are present in the emotional signals of nonhuman animals. We conclude with a discussion of music designed specifically to induce emotional states in animals, both cotton top tamarin monkeys and domestic cats.
Blog Categories:
evolution/debate,
evolutionary psychology,
human evolution,
music
Wednesday, December 02, 2015
Increased false-memory susceptibility after mindfulness meditation
From Wilson et al.:
The effect of mindfulness meditation on false-memory susceptibility was examined in three experiments. Because mindfulness meditation encourages judgment-free thoughts and feelings, we predicted that participants in the mindfulness condition would be especially likely to form false memories. In two experiments, participants were randomly assigned to either a mindfulness induction, in which they were instructed to focus attention on their breathing, or a mind-wandering induction, in which they were instructed to think about whatever came to mind. The overall number of words from the Deese-Roediger-McDermott paradigm that were correctly recalled did not differ between conditions. However, participants in the mindfulness condition were significantly more likely to report critical nonstudied items than participants in the control condition. In a third experiment, which tested recognition and used a reality-monitoring paradigm, participants had reduced reality-monitoring accuracy after completing the mindfulness induction. These results demonstrate a potential unintended consequence of mindfulness meditation in which memories become less reliable.
Blog Categories:
meditation,
memory/learning,
mindfulness
Tuesday, December 01, 2015
Religiousness decreases children’s altruistic behaviors.
Decety et al. challenge the view that religiosity facilitates prosocial behavior:
Highlights
• Family religious identification decreases children’s altruistic behaviors
• Religiousness predicts parent-reported child sensitivity to injustices and empathy
• Children from religious households are harsher in their punitive tendencies
Summary
Prosocial behaviors are ubiquitous across societies. They emerge early in ontogeny and are shaped by interactions between genes and culture. Over the course of middle childhood, sharing approaches equality in distribution. Since 5.8 billion humans, representing 84% of the worldwide population, identify as religious, religion is arguably one prevalent facet of culture that influences the development and expression of prosociality. While it is generally accepted that religion contours people’s moral judgments and prosocial behavior, the relation between religiosity and morality is a contentious one. Here, we assessed altruism and third-party evaluation of scenarios depicting interpersonal harm in 1,170 children aged between 5 and 12 years in six countries (Canada, China, Jordan, Turkey, USA, and South Africa), the religiousness of their household, and parent-reported child empathy and sensitivity to justice. Across all countries, parents in religious households reported that their children expressed more empathy and sensitivity for justice in everyday life than non-religious parents. However, religiousness was inversely predictive of children’s altruism and positively correlated with their punitive tendencies. Together these results reveal the similarity across countries in how religion negatively influences children’s altruism, challenging the view that religiosity facilitates prosocial behavior.
Blog Categories:
culture/politics,
human development,
religion
Monday, November 30, 2015
The effects of birth order on personality.
Rohrer et al. issue a new installment on the perennial question of how our birth order influences us, with a study showing higher intelligence in firstborns, but no birth-order effects on extraversion, emotional stability, agreeableness, conscientiousness, or imagination:
Significance
The question of whether a person’s position among siblings has a lasting impact on that person’s life course has fascinated both the scientific community and the general public for >100 years. By combining large datasets from three national panels, we confirmed the effect that firstborns score higher on objectively measured intelligence and additionally found a similar effect on self-reported intellect. However, we found no birth-order effects on extraversion, emotional stability, agreeableness, conscientiousness, or imagination. This finding contradicts lay beliefs and prominent scientific theories alike and indicates that the development of personality is less determined by the role within the family of origin than previously thought.
Abstract
This study examined the long-standing question of whether a person’s position among siblings has a lasting impact on that person’s life course. Empirical research on the relation between birth order and intelligence has convincingly documented that performances on psychometric intelligence tests decline slightly from firstborns to later-borns. By contrast, the search for birth-order effects on personality has not yet resulted in conclusive findings. We used data from three large national panels from the United States (n = 5,240), Great Britain (n = 4,489), and Germany (n = 10,457) to resolve this open research question. This database allowed us to identify even very small effects of birth order on personality with sufficiently high statistical power and to investigate whether effects emerge across different samples. We furthermore used two different analytical strategies by comparing siblings with different birth-order positions (i) within the same family (within-family design) and (ii) between different families (between-family design). In our analyses, we confirmed the expected birth-order effect on intelligence. We also observed a significant decline of a 10th of a SD in self-reported intellect with increasing birth-order position, and this effect persisted after controlling for objectively measured intelligence. Most important, however, we consistently found no birth-order effects on extraversion, emotional stability, agreeableness, conscientiousness, or imagination. On the basis of the high statistical power and the consistent results across samples and analytical designs, we must conclude that birth order does not have a lasting effect on broad personality traits outside of the intellectual domain.
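For readers unfamiliar with the two designs, here is a minimal sketch (on a tiny invented dataset, not the panel data) of how a between-family comparison differs from a within-family comparison that contrasts each child only against his or her own siblings:
```python
# Illustrative only: contrasts the between-family and within-family designs
# described above, using a small synthetic dataset (not the study's panel data).
import pandas as pd

df = pd.DataFrame({
    "family":      ["A", "A", "B", "B", "B", "C", "C"],
    "birth_order": [1, 2, 1, 2, 3, 1, 2],
    "iq":          [103, 101, 98, 96, 95, 110, 108],
})

# Between-family design: pool all children and compare mean IQ by birth order,
# ignoring which family each child comes from.
between = df.groupby("birth_order")["iq"].mean()
print("Between-family means:\n", between)

# Within-family design: subtract each family's own mean first, so only
# differences among siblings of the same family contribute to the estimate.
df["iq_within"] = df["iq"] - df.groupby("family")["iq"].transform("mean")
within = df.groupby("birth_order")["iq_within"].mean()
print("Within-family (family-demeaned) means:\n", within)
```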
Friday, November 27, 2015
A picture show; and, Alzheimer's and the innate immune system
Our nervous and immune systems interact with each other at the same time they both interact with our environment. Cell magazine has put together a picture show that illustrates the beauty and complexity of these interactions. It accompanies special issues of Trends in Neuroscience and Trends in Immunology that deal with neuroimmunology in disease and in normal aging. Of special interest is a description of how releasing an inhibition of the innate immune system can allow phagocytes to clear the Aβ/β-amyloid of Alzheimer's disease from the brain.
Thursday, November 26, 2015
Online political communication: more than an echo chamber?
From Barbera et al.:
We estimated ideological preferences of 3.8 million Twitter users and, using a data set of nearly 150 million tweets concerning 12 political and nonpolitical issues, explored whether online communication resembles an “echo chamber” (as a result of selective exposure and ideological segregation) or a “national conversation.” We observed that information was exchanged primarily among individuals with similar ideological preferences in the case of political issues (e.g., 2012 presidential election, 2013 government shutdown) but not many other current events (e.g., 2013 Boston Marathon bombing, 2014 Super Bowl). Discussion of the Newtown shootings in 2012 reflected a dynamic process, beginning as a national conversation before transforming into a polarized exchange. With respect to both political and nonpolitical issues, liberals were more likely than conservatives to engage in cross-ideological dissemination; this is an important asymmetry with respect to the structure of communication that is consistent with psychological theory and research bearing on ideological differences in epistemic, existential, and relational motivation. Overall, we conclude that previous work may have overestimated the degree of ideological segregation in social-media usage.
Wednesday, November 25, 2015
Choosing to be grateful.
In a piece timed for Thanksgiving, Arthur Brooks does a nice job of fetching up and giving links to a number of interesting studies on the positive effects of gratitude on well-being. Arthur Brooks is a person whom my knee-jerk liberal reflexes would dictate I discount immediately, because he is head of the conservative American Enterprise Institute. However, this former academic is one clever dude. (Here is an Op-Ed piece on abundance. Other commentaries are found here.)
Musical expertise modulates the brain’s entrainment to music.
Yet another study, by Doelling and Poeppel, showing effects of musical training on the brain and supporting a role for cortical oscillatory activity in music perception and cognition:
Significance
We demonstrate that cortical oscillatory activity in both low (less than 8 Hz) and high (15–30 Hz) frequencies is tightly coupled to behavioral performance in musical listening, in a bidirectional manner. In light of previous work on speech, we propose a framework in which the brain exploits the temporal regularities in music to accurately parse individual notes from the sound stream using lower frequencies (entrainment) and in higher frequencies to generate temporal and content-based predictions of subsequent note events associated with predictive models.
Abstract
Recent studies establish that cortical oscillations track naturalistic speech in a remarkably faithful way. Here, we test whether such neural activity, particularly low-frequency (less than 8 Hz; delta–theta) oscillations, similarly entrain to music and whether experience modifies such a cortical phenomenon. Music of varying tempi was used to test entrainment at different rates. In three magnetoencephalography experiments, we recorded from nonmusicians, as well as musicians with varying years of experience. Recordings from nonmusicians demonstrate cortical entrainment that tracks musical stimuli over a typical range of tempi, but not at tempi below 1 note per second. Importantly, the observed entrainment correlates with performance on a concurrent pitch-related behavioral task. In contrast, the data from musicians show that entrainment is enhanced by years of musical training, at all presented tempi. This suggests a bidirectional relationship between behavior and cortical entrainment, a phenomenon that has not previously been reported. Additional analyses focus on responses in the beta range (∼15–30 Hz)—often linked to delta activity in the context of temporal predictions. Our findings provide evidence that the role of beta in temporal predictions scales to the complex hierarchical rhythms in natural music and enhances processing of musical content. This study builds on important findings on brainstem plasticity and represents a compelling demonstration that cortical neural entrainment is tightly coupled to both musical training and task performance, further supporting a role for cortical oscillatory activity in music perception and cognition.
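For readers curious what "entrainment" means operationally, here is a minimal sketch of one common way to quantify it: the phase-locking between a stimulus envelope and a band-passed neural signal. The data are synthetic and the filter settings illustrative; this is not Doelling and Poeppel's analysis pipeline.
```python
# Illustrative sketch: phase coherence between a (synthetic) note-onset envelope
# and a (synthetic) low-frequency "neural" signal. Not the authors' pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0                                   # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)                 # 30 s of data
note_rate = 2.0                              # 2 notes per second

stimulus = 0.5 * (1 + np.cos(2 * np.pi * note_rate * t))              # stimulus envelope
neural = np.cos(2 * np.pi * note_rate * t - 0.8) + 0.5 * np.random.randn(t.size)

def band_phase(x, lo, hi):
    """Instantaneous phase of x band-passed between lo and hi Hz."""
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

# Phase-locking value: 1 = perfectly constant phase lag, 0 = no consistent relation.
dphi = band_phase(stimulus, 0.5, 8) - band_phase(neural, 0.5, 8)
plv = np.abs(np.mean(np.exp(1j * dphi)))
print(f"Stimulus-brain phase-locking value: {plv:.2f}")
```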
Blog Categories:
brain plasticity,
human development,
music
Tuesday, November 24, 2015
Our bodies can sabotage our healthy behaviors.
I pass on an interesting chunk from Reynolds' review of work by Mansoubi et al. showing that people who use sit-to-stand workstations in their office compensate by reducing activity and increasing sitting outside of working hours, thus canceling out the effects of their virtuous exercise at the office.
...the human body and brain are funny. They often, and rather insidiously, undermine some of our best efforts to be healthier, in an attempt to maintain our physiological status quo. The result can be that we do not benefit as much as we’d hoped from changes to our lifestyles. When we slash calories to lose weight, for instance, our bodies often lower our metabolic rate, and our weight doesn’t budge much.
Similarly, studies of people who begin or greatly intensify an exercise program have shown that these exercisers often start sitting more during the hours when they are not working out, so that their overall daily energy expenditure doesn’t increase substantially and the number of hours that they spend sitting grows.
Monday, November 23, 2015
Wielding power increases testosterone in women.
Anders et al. provide evidence for a gender→testosterone pathway:
Significance
Human biology is typically studied within the framework of sex (evolved, innate factors) rather than gender (sociocultural factors), despite some attention to nature/nurture interactions. Testosterone is an exemplar of biology studied as natural difference: men’s higher testosterone is typically seen as an innate “sex” difference. However, our experiment demonstrates that gender-related social factors also matter, even for biological measures. Gender socialization may affect testosterone by encouraging men but not women toward behaviors that increase testosterone. This shows that research on human sex biology needs to account for gender socialization and that nurture, as well as nature, is salient to hormone physiology. Our paper provides a demonstration of a novel gender→testosterone pathway, opening up new avenues for studying gender biology.
Abstract
Testosterone is typically understood to contribute to maleness and masculinity, although it also responds to behaviors such as competition. Competition is crucial to evolution and may increase testosterone but also is selectively discouraged for women and encouraged for men via gender norms. We conducted an experiment to test how gender norms might modulate testosterone as mediated by two possible gender→testosterone pathways. Using a novel experimental design, participants (trained actors) performed a specific type of competition (wielding power) in stereotypically masculine vs. feminine ways. We hypothesized in H1 (stereotyped behavior) that wielding power increases testosterone regardless of how it is performed, vs. H2 (stereotyped performance), that wielding power performed in masculine but not feminine ways increases testosterone. We found that wielding power increased testosterone in women compared with a control, regardless of whether it was performed in gender-stereotyped masculine or feminine ways. Results supported H1 over H2: stereotyped behavior but not performance modulated testosterone. These results also supported theory that competition modulates testosterone over masculinity. Our findings thus support a gender→testosterone pathway mediated by competitive behavior. Accordingly, cultural pushes for men to wield power and women to avoid doing so may partially explain, in addition to heritable factors, why testosterone levels tend to be higher in men than in women: A lifetime of gender socialization could contribute to “sex differences” in testosterone. Our experiment opens up new questions of gender→testosterone pathways, highlighting the potential of examining nature/nurture interactions and effects of socialization on human biology.
Blog Categories:
acting/choosing,
culture/politics,
sex
Friday, November 20, 2015
Flip-Flops in medical advice.
I want to forward readers some clips I've taken from a review by Zuger of a recent book, "Ending Medical Reversal," by Prasad and Cifu. After glancing through the following you might want to also have a look at this article by Span on the over-treatment of older patients.
Prasad and Cifu... have set themselves the task of figuring out how often modern medicine reverses itself, analyzing why it happens, and suggesting ways to make it stop...[they] extrapolate from past reversals to conclude that about 40 percent of what we consider state-of-the-art health care is likely to turn out to be unhelpful or actually harmful.
Recent official flip-flops include habits of treating everything from lead poisoning to blood clots, from kidney stones to heart attacks. One reversal concerned an extremely common orthopedic procedure, the surgical repair of the meniscus in the knee, which turns out to be no more effective than physical therapy alone. The interested reader can plow through almost 150 disproved treatments in the book’s appendix.
What could make more sense, after all, than finding some cancers early, fixing a piece of torn cartilage, closing a hole in the heart, and propping open blood vessels that have become perilously narrow? And yet not one of these helpful interventions has been shown to make a difference in the health or survival of patients who obediently line up to have them done.
“Often the study of how therapies should work is much more extensive and comes before the study of whether therapies do work,” the authors write. Thus a medical culture based on “should work” rather than “does work” is condemned to constantly correct itself when the science is finally evaluated for outcomes that matter.
To fix this constant backtracking would require nothing less than a revolution in how doctors are trained, with an emphasis on the proven and practical rather than the theoretical. (It would also require a second revolution in how doctors practice, with less prestige and remuneration for coming up with new ideas and more for validating old ones.)
Blog Categories:
aging,
culture/politics,
evolutionary psychology,
technology
Thursday, November 19, 2015
Divided we fall - putting social progress on par with prosperity
Laura Levis, in Harvard Magazine, describes work of Porter and Stern, who have developed a social progress index that:
...ranks 133 countries on multiple dimensions of social and environmental performance in three main categories: Basic Human Needs (food, water, shelter, safety); Foundations of Wellbeing (basic education, information, health, and a sustainable environment); and Opportunity (freedom of choice, freedom from discrimination, and access to higher education). Porter considers the index “the most comprehensive framework developed for measuring social progress, and the first to measure social progress independently of gross domestic product (GDP)."
The United States may rank sixth among countries in terms of GDP per capita, but its results on the Social Progress Index are lackluster. It is sixteenth overall in social progress: well below Canada, the United Kingdom, Germany, and Japan in several key areas, including citizens’ quality of life and provision of basic human needs. The nation ranks thirtieth in personal safety, forty-fifth in access to basic knowledge, sixty-eighth on health and wellness, and seventy-fourth in ecosystem sustainability. “We had a lot of firsts in social progress over the years in America,” Porter points out, “but we kind of lost our rhythm and our momentum.”
About 20 or 30 years ago, for reasons Porter says he cannot completely explain, the rate of progress in America began to slow down. As a society, he points out, Americans slowly became more divided, and important priorities such as healthcare, education, and politics suffered. “We had gridlock, whether it’s unions or whether it’s ideological differences, and—although we’ve made some big steps in certain areas of human rights like gay rights—if you think about the really core things like our education system and our health system, we’re just not moving,” he says. “I think our political system isn’t helping, because we’re all about political gains and blocking the other guy, rather than compromising and getting things done.”
Wednesday, November 18, 2015
A personal note, the Steinway B now in Fort Lauderdale - some Chopin
After a few tense moments, my Steinway B is now moved from Wisconsin to Florida.
I've upgraded my video and audio recording equipment, finally got the bugs out of the process, and thought I would pass on my first test recording - of a Chopin Nocturne that I plan to play at a recital next February here in Fort Lauderdale. The vers. 3 refers to the fact that this is the third recording of this piece that I have put on my YouTube channel.
Neuropolitics - reading the electorate's mind.
Members of our two major political parties increasingly seem to inhabit alternative realities that are mutually uncomprehending. How about reinforcing these bubbles with technology for feeding blocks of voters only what they want to hear? A NYTimes piece by Kevin Randall describes some really spooky new political tools: digital campaign signs that read the facial reactions of those watching their message and tally emotions like happiness, surprise, anger, disgust, fear, and sadness. This permits alteration of the message to elicit desired responses. Such devices have been used in Mexico, Poland, Turkey, and probably the U.S.
In Mexico, President Enrique Peña Nieto’s campaign and his party, the Institutional Revolutionary Party, or PRI, employed tools to measure voters’ brain waves, skin arousal, heart rates and facial expressions during the 2012 presidential campaign. More recently, the party has been using facial coding to help pick its best candidates, one consultant says. Some officials even speak openly about their embrace of neuropolitical techniques, and not just for campaigning, but for governing as well.
Neuromarketing consultants say they are conducting research like this in more than a dozen countries, including Argentina, Brazil, Costa Rica, El Salvador, Russia, Spain and, to a much lesser extent, the United States.
One neuromarketing firm says it has worked for a Hillary Rodham Clinton presidential campaign committee to help it improve its targeting and messages.
David Plouffe, President Obama’s former campaign manager, said the tools “would be new ground for political campaigns...The richness of this data compared to what is gathered today in testing ads or evaluating speeches and debates, which is the trusty old dial test and primitive qualitative methods, is hard to comprehend. It gets more to emotion, intensity and a more complex understanding of how people are reacting.”
Added note: Mexico's governing party, the PRI, has now said it will no longer employ the techniques described above.
Blog Categories:
culture/politics,
social cognition,
technology
Tuesday, November 17, 2015
The Brain with David Eagleman
I want to point MindBlog readers who aren't already aware of the David Eagleman PBS series on the brain to its description on the PBS website. The episodes can be viewed on mobile devices, in your web browser, etc. I found episode 4 "Why Do I Need You?," on our social brains, to be a very compelling one.
More evolution cartoons
I pass this on from a recent seminar presentation to the Chaos group at the Univ. of Wisconsin... There must be hundreds of cartoons that take a different tack on this sequence:
Monday, November 16, 2015
Good and bad stress in the brain - the inverted U
I want to pass on a bit of commentary by Robert Sapolsky, in a special issue of Nature Neuroscience that focuses on stress, that presents a clear and lucid description of "good stress" and "bad stress."
...to a large extent, the effects of stress in the brain form a nonlinear 'inverted-U' dose-response curve as a function of stressor severity: the transition from the complete absence of stress to mild stress causes an increase in endpoint X, the transition from mild-to-moderate stress causes endpoint X to plateau and the transition from moderate to more severe stress decreases endpoint X.
A classic example of the inverted-U is seen with the endpoint of synaptic plasticity in the hippocampus, where mild-to-moderate stressors, or exposure to glucocorticoid concentrations in the range evoked by such stressors, enhances primed burst potentiation, whereas more severe stressors or equivalent elevations of glucocorticoid concentrations do the opposite [11]. This example also demonstrates an elegant mechanism for generating such an inverted-U [12]. Specifically, the hippocampus contains ample quantities of receptors for glucocorticoids. These come in two classes. First, there are the high-affinity low-capacity mineralocorticoid receptors (MRs), which are mostly occupied under basal, non-stress conditions and in which occupancy increases to saturating levels with mild-to-moderate stressors. In contrast, there are the low-affinity, high-capacity glucocorticoid receptors (GRs), which are not substantially occupied until there is major stress-induced glucocorticoid secretion. Critically, it is increased MR occupancy that enhances synaptic plasticity, whereas increased occupancy of GRs impairs it; the inverted-U pattern emerges from these opposing effects.
...in general, the effects of mild-to-moderate stress (that is, the left side of the U) are salutary, whereas those of severe stress are the opposite. In other words, it is not the case that stress is bad for you. It is major stress that is bad for you, whereas mild stress is anything but; when it is the optimal amount of stress, we love it. What constitutes optimal good stress? It occurs in a setting that feels safe; we voluntarily ride a roller coaster knowing that we are risking feeling a bit queasy, but not risking being decapitated. Moreover, good stress is transient; it is not by chance that a roller coaster ride is not 3 days long. And what is mild, transient stress in a benevolent setting? For this we have a variety of terms: arousal, alertness, engagement, play and stimulation (Fig. 1). The upswing of the inverted-U is the domain of any good educator who intuits the ideal space between a student being bored and being overwhelmed, where challenge is energized by a well-calibrated motivating sense of 'maybe'; after all, it is in the realm of plausible, but not guaranteed, reward that anticipatory bursts of mesolimbic dopamine release are the greatest [19]. And the downswing of the inverted-U is, of course, the universe of “stress is bad for you”. Thus, the ultimate goal of those studying stress is not to 'cure' us of it, but to optimize it.
Figure 1: Conceptualization of the inverted-U in the context of the benefits and costs of stress.
A broad array of neurobiological endpoints show the same property, which is that stress in the mild-to-moderate range (roughly corresponding to 10–20 μg dl−1 of corticosterone, the species-specific glucocorticoid of rats and mice) has beneficial, salutary effects; subjectively, when exposure is transient, we typically experience this range as being stimulatory. In contrast, both the complete absence of stress, or stress that is more severe and/or prolonged than that in the stimulatory range, have deleterious effects on those same neurobiological endpoints. The absence of stress is subjectively experienced as understimulatory by most, whereas the excess is typically experienced as overstimulatory, which segues into 'stressful'. Many of the inverted-U effects of stress in the brain are explained by the dual receptor system for glucocorticoids, where salutary effects are heavily mediated by increasing occupancy of the high-affinity, low-capacity MRs and deleterious effects are mediated by the low-affinity, high-capacity GRs.
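A toy calculation makes the dual-receptor logic concrete: if a benefit tracks occupancy of high-affinity MRs and a cost tracks occupancy of low-affinity GRs, their difference traces an inverted-U across glucocorticoid levels. The affinities and weights below are arbitrary illustrative numbers, not measured values.
```python
# Toy illustration of how two receptor populations with different affinities
# can produce an inverted-U: net effect = benefit via high-affinity MRs minus
# cost via low-affinity GRs. Parameter values are arbitrary, for illustration only.
import numpy as np

glucocorticoid = np.linspace(0.01, 100, 200)   # arbitrary concentration units

def occupancy(conc, kd):
    """Simple saturating (Michaelis-Menten-like) receptor occupancy."""
    return conc / (conc + kd)

mr = occupancy(glucocorticoid, kd=1.0)    # high-affinity MRs saturate early
gr = occupancy(glucocorticoid, kd=30.0)   # low-affinity GRs engage only at high levels

net_effect = 1.0 * mr - 1.5 * gr          # benefit minus cost

peak = glucocorticoid[np.argmax(net_effect)]
print(f"Net effect peaks at a moderate level (~{peak:.1f} units), then declines.")
```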
Saturday, November 14, 2015
Shift happens
I'm passing on this interesting and scary 2014 video about our future, sent to me by a friend.
Friday, November 13, 2015
How to live what we don't believe.
Veteran readers of MindBlog will be aware that a continuing issue has been the problem of what to do with our understanding of how our brains really work - the fact that there is no free will, morality, or "I" of the sort we commonly suppose. (See for example "The I-Illusion," "Having no self..," "Are we really conscious.")
Two recent Op-Ed pieces in the NYTimes continue this thread: Risen and Nussbaum on "Believing What You Don't Believe" and William Irwin on "How to Live a Lie." Irwin considers morality, religion, and finally, free will:
When a novel or movie is particularly engrossing, our reactions to it may be involuntary and resistant to our attempts to counter them. We form what the philosopher Tamar Szabo Gendler calls aliefs — automatic belief-like attitudes that contrast with our well considered beliefs.
Like our involuntary screams in the theater, there may be cases of involuntary moral fictionalism or religious fictionalism as well. Among philosophical issues, though, free will seems to be the clearest case of involuntary fictionalism. It seems clear that I have free will when, for example, I choose from many options to order pasta at a restaurant. Yet few, if any, philosophical notions are harder to defend than free will. Even dualists, who believe in a nonmaterial soul, run into problems with divine foreknowledge. If God foresaw that I would order pasta, then was I really free to do otherwise, to order steak?
In the traditional sense, having free will means that multiple options are truly available to me. I am not a computer, running a decision-making program. No matter what I choose, I could have chosen otherwise. However, in a materialist, as opposed to dualist, worldview, there is no place in the causal chain of material things for the will to act in an uncaused way. Thus only one outcome of my decision-making process is possible. Not even quantum indeterminacy could give me the freedom to order steak. The moment after I recognize this, however, I go back to feeling as if my decision to order pasta was free and that my future decision of what to have for dessert will also be free. I am a free will fictionalist. I accept that I have free will even though I do not believe it.
Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.
Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.
William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.
Blog Categories:
consciousness,
self,
social cognition
Thursday, November 12, 2015
Amazing…. Robots learn coordinated behavior from scratch.
Der and Martius suggest that a novel plasticity rule can explain the development of sensorimotor intelligence, without having to postulate higher-level constructs such as intrinsic motivation, curiosity, or a specific reward system. This seems to me to be groundbreaking and fascinating work. I pass on their overview video, and then some context from their introduction, which I recommend that you read. Here is their abstract. (I don't even begin to understand the description of their feed-forward controller network and humanoid robot, which follows a “chaining together what changes together” rule. I can send motivated readers a PDF of the whole article with technical details and equations.)
Research in neuroscience produces an understanding of the brain on many different levels. At the smallest scale, there is enormous progress in understanding mechanisms of neural signal transmission and processing. At the other end, neuroimaging and related techniques enable the creation of a global understanding of the brain’s functional organization. However, a gap remains in binding these results together, which leaves open the question of how all these complex mechanisms interact. This paper advocates for the role of self-organization in bridging this gap. We focus on the functionality of neural circuits acquired during individual development by processes of self-organization—making complex global behavior emerge from simple local rules.
Donald Hebb’s formula “cells that fire together wire together” may be seen as an early example of such a simple local rule which has proven successful in building associative memories and perceptual functions. However, Hebb’s law and its successors...are restricted to scenarios where the learning is driven passively by an externally generated data stream. However, from the perspective of an autonomous agent, sensory input is mainly determined by its own actions. The challenge of behavioral self-organization requires a new kind of learning that bootstraps novel behavior out of the self-generated past experiences.
This paper introduces a rule which may be expressed as “chaining together what changes together.” This rule takes into account temporal structure and establishes contact to the external world by directly relating the behavioral level to the synaptic dynamics. These features together provide a mechanism for bootstrapping behavioral patterns from scratch.
This synaptic mechanism is neurobiologically plausible and raises the question of whether it is present in living beings. This paper aims to encourage such initiatives by using bioinspired robots as a methodological tool. Admittedly, there is a large gap between biological beings and such robots. However, in the last decade, robotics has seen a change of paradigm from classical AI thinking to embodied AI which recognizes the role of embedding the specific body in its environment. This has moved robotics closer to biological systems and supports their use as a testbed for neuroscientific hypotheses.
We deepen this argument by presenting concrete results showing that the proposed synaptic plasticity rule generates a large number of phenomena which are important for neuroscience. We show that up to the level of sensorimotor contingencies, self-determined behavioral development can be grounded in synaptic dynamics, without having to postulate higher-level constructs such as intrinsic motivation, curiosity, or a specific reward system. This is achieved with a very simple neuronal control structure by outsourcing much of the complexity to the embodiment [the idea of morphological computation].
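As a rough intuition for what a "chaining together what changes together" rule might look like, here is a toy differential-Hebbian-style update in which a connection grows when the rates of change of two signals co-vary. This is only my illustration of the general idea, not the authors' controller network or their actual learning rule.
```python
# Toy differential-Hebbian sketch of "chaining together what changes together":
# a weight w linking signal x to signal y grows when their rates of change
# co-vary. An illustration of the general idea only, not the authors' rule.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2000)
dt = t[1] - t[0]

x = np.sin(t) + 0.05 * rng.standard_normal(t.size)        # e.g., a joint angle
y = np.sin(t - 0.3) + 0.05 * rng.standard_normal(t.size)  # a signal that changes with it
z = rng.standard_normal(t.size)                           # an unrelated signal

def chain_weight(a, b, eta=0.01):
    """Accumulate w += eta * da/dt * db/dt over time."""
    w = 0.0
    for da, db in zip(np.diff(a) / dt, np.diff(b) / dt):
        w += eta * da * db
    return w

print("weight(x, y) =", round(chain_weight(x, y), 2))   # grows: they change together
print("weight(x, z) =", round(chain_weight(x, z), 2))   # stays near zero
```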
Wednesday, November 11, 2015
Trusting robots, but not androids
Gilbert Chin points to work by Mathur and Reichling in the journal Cognition.
Robots collect warehoused books, weld car parts together, and vacuum floors. As the number of android robots increases, however, concerns about the “uncanny valley” phenomenon—that people dislike a vaguely human-like robot more than either a machine-like robot or a real human—remain. Mathur and Reichling revisited whether human reactions to android robots exhibit an uncanny valley effect, using a set of 80 robot head shots gathered from the Internet and a systematically morphed set of six images extending from entirely robot to entirely human. Humans did adhere to the uncanny valley curve when rating the likeability of both sets of faces; what's more, this curve also described the extent to which those faces were trusted.
Here's the summary from the paper:
Highlights
• Likability ratings of a large sample of real robot faces had a robust Uncanny Valley.
• Digitally composed robot face series demonstrated a similar Uncanny Valley.
• The Uncanny Valley may subtly alter humans’ trusting behavior toward robot partners.
• Category confusion may occur in the Uncanny Valley but did not mediate the effect.
Abstract
Android robots are entering human social life. However, human–robot interactions may be complicated by a hypothetical Uncanny Valley (UV) in which imperfect human-likeness provokes dislike. Previous investigations using unnaturally blended images reported inconsistent UV effects. We demonstrate an UV in subjects’ explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces. An “investment game” showed that the UV penetrated even more deeply to influence subjects’ implicit decisions concerning robots’ social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans. Preliminary evidence suggests category confusion may occur in the UV but does not mediate the likability effect. These findings suggest that while classic elements of human social psychology govern human–robot social interaction, robust UV effects pose a formidable android-specific problem.
Tuesday, November 10, 2015
The unknowns of cognitive enhancement
Martha Farah points out how little is known about current methods of cognitive enhancement, and suggests several reasons why we are so ignorant. A few clips from her article:
...stimulants such as amphetamine and methylphenidate (sold under trade names such as Adderall and Ritalin, respectively) are widely used for nonmedical reasons …cognitive enhancement with stimulants is commonplace on college campuses…use by college faculty and other professionals to enhance workplace productivity has been documented…The published literature includes substantially different estimates of the effectiveness of prescription stimulants as cognitive enhancers. A recent meta-analysis suggests that the effect is most likely real but small for executive function tests stressing inhibitory control, and probably nonexistent for executive function tests stressing working memory.
Farah notes several studies suggesting that the effects of Adderall and another drug, modafinil (trade name Provigil), on ‘cognitive enhancement’ are actually effects on task motivation and mood.
The newest trend in cognitive enhancement is the use of transcranial electric stimulation. In the most widely used form, called transcranial direct current stimulation (tDCS), a weak current flows between an anode and a cathode placed on the head, altering the resting potential of neurons in the current's path….Transcranial electric stimulation is expanding …with new companies selling compact, visually appealing, user-friendly devices…published literature includes a mix of findings. One recent attempt to synthesize the literature with meta-analysis concluded that tDCS has no effect whatsoever on a wide range of cognitive abilities.
Why are we so ignorant about cognitive enhancement? Several factors seem to be at play. The majority of studies on enhancement effectiveness have been carried out on small samples, rarely more than 50 subjects, which limits their power. Furthermore, cognitive tasks typically lend themselves to a variety of different but reasonable outcome measures, such as overall errors, specific types of errors (for example, false alarms), and response times. In addition, there is usually more than one possible statistical approach to analyze the enhancement effect. Small samples and flexibility in design and analysis raise the likelihood of published false positives. In addition, pharmacologic and electric enhancements may differ in effectiveness depending on the biological and psychological traits of the user, which complicates the effort to understand the true enhancement potential of these technologies. Industry is understandably unmotivated to take on the expense of appropriate large-scale trials of enhancement, given that the stimulants used are illegally diverted and transcranial electric stimulation devices can be sold without such evidence. The inferential step from laboratory effect to real-world benefit adds another layer of challenge. Given that enhancements would likely be used for years, long-term effectiveness and safety are essential concerns but are particularly difficult and costly to determine. As a result, the only large-scale trial we may see is the enormous but uncontrolled and poorly monitored trial of people using these drugs and devices on their own.
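Farah's point about small samples and flexible analyses can be made concrete with a short simulation: even when a drug has no effect at all, testing several outcome measures in a small sample makes some "significant" result fairly likely. The sample size and number of outcomes below are illustrative choices.
```python
# Monte Carlo sketch of the point above: with no true effect, small samples plus
# several outcome measures yield a "significant" finding surprisingly often.
# Sample size and number of outcomes are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, n_outcomes, n_sims = 25, 4, 5000

false_positive = 0
for _ in range(n_sims):
    # Drug and placebo groups drawn from the same distribution (no real effect).
    drug = rng.standard_normal((n_outcomes, n_per_group))
    placebo = rng.standard_normal((n_outcomes, n_per_group))
    pvals = [stats.ttest_ind(drug[i], placebo[i]).pvalue for i in range(n_outcomes)]
    if min(pvals) < 0.05:            # report whichever outcome "worked"
        false_positive += 1

print(f"Chance of at least one p < 0.05 with no true effect: {false_positive / n_sims:.0%}")
```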
Blog Categories:
attention/perception,
brain plasticity
Monday, November 09, 2015
Can we really change our aging?
I thought I would point MindBlog readers to a brief talk I gave, "Can we really change our aging?," at the Nov. 1, 2015 meeting of the Fort Lauderdale Prime Timers, and a Nov. 7 Lunch and Learn session of SAGE South Florida. It distills the contents of about 250 MindBlog posts I’ve written describing research on aging, and passes on some of the facts I think are most striking.
Friday, November 06, 2015
Critical period for visual pathway formation? - another dogma bites the dust.
India, which may have the largest number of blind children in the world, with estimates ranging from 360,000 to nearly 1.2 million, is providing a vast laboratory that has overturned one of the central dogmas of brain development - that development of visual (and other) pathways must take place within a critical time window, after which formation of proper connections becomes much more difficult or impossible. Until recently, children over 8 years old with congenital cataracts were not considered appropriate subjects for lens replacement surgery. In Science Magazine, Rhitu Chatterjee describes a project begun in 2004, led by neuroscientist Pawan Sinha, that has restored sight to much older children. The story follows one 18-year-old who, over the 18 months following lens replacement, began to see with enough clarity to bike through a crowded marketplace.
Of the nearly 500 children and young adults who have undergone cataract operations, about half became research subjects. One fascinating result that emerged is that visual experience isn't critical for certain visual functions; the brain seems to be prewired, for example, to be fooled by some visual illusions that were thought to be a product of learning. One is the Ponzo illusion, which typically involves lines converging on the horizon (like train tracks) and two short parallel lines cutting across them. Although the horizontal lines are identical, the one nearer the horizon looks longer. If the Ponzo illusion were the result of visual learning, newly sighted kids wouldn't fall for it. But in fact, children who had just had their vision restored were just as susceptible to the Ponzo illusion as were control subjects with normal vision. The kids also fell for the Müller-Lyer illusion, a pair of lines with arrowheads on both ends; one set of arrowheads points outward, the other inward toward the line. The line with the inward arrowheads seems longer. These results lead Sinha to suggest that the illusions are driven by very simple factors in the image that the brain is probably innately programmed to respond to.
Blog Categories:
brain plasticity,
human development,
vision
Thursday, November 05, 2015
A biomarker for early detection of dementia
Kunz et al. show that in a group at genetic risk for developing Alzheimer's, altered brain signals can be detected decades before potential onset of the disease. Individuals showing this change would be candidates for starting therapy at early stages of the disease.
Alzheimer’s disease (AD) manifests with memory loss and spatial disorientation. AD pathology starts in the entorhinal cortex, making it likely that local neural correlates of spatial navigation, particularly grid cells, are impaired. Grid-cell–like representations in humans can be measured using functional magnetic resonance imaging. We found that young adults at genetic risk for AD (APOE-ε4 carriers) exhibit reduced grid-cell–like representations and altered navigational behavior in a virtual arena. Both changes were associated with impaired spatial memory performance. Reduced grid-cell–like representations were also related to increased hippocampal activity, potentially reflecting compensatory mechanisms that prevent overt spatial memory impairment in APOE-ε4 carriers. Our results provide evidence of behaviorally relevant entorhinal dysfunction in humans at genetic risk for AD, decades before potential disease onset.
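For the curious, here is a minimal sketch of the kind of hexadirectional (sixfold-symmetric) analysis typically used to detect grid-cell-like signals in fMRI, run on synthetic data with an invented grid orientation; it is meant only to illustrate the idea, not to reproduce Kunz et al.'s pipeline.
```python
# Minimal sketch of a hexadirectional analysis: regress a signal on
# sin(6*theta) and cos(6*theta) of movement direction. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)            # virtual movement directions
true_orientation = np.deg2rad(15)                  # hypothetical grid orientation
signal = 0.5 * np.cos(6 * (theta - true_orientation)) + rng.standard_normal(500)

# Least-squares fit of the two sixfold regressors (plus an intercept).
X = np.column_stack([np.ones_like(theta), np.sin(6 * theta), np.cos(6 * theta)])
b0, b_sin, b_cos = np.linalg.lstsq(X, signal, rcond=None)[0]

amplitude = np.hypot(b_sin, b_cos)                 # strength of sixfold modulation
orientation = np.arctan2(b_sin, b_cos) / 6         # recovered grid orientation
print(f"Sixfold amplitude: {amplitude:.2f}, orientation: {np.rad2deg(orientation):.1f} deg")
```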
Wednesday, November 04, 2015
Lifting weights and the brain.
Reynolds points to a study suggesting that light weight training slows down the shrinkage and tattering of our brain's white matter (nerve tracts) that normally occurs with aging.
Tuesday, November 03, 2015
Brain Pickings on 'the most important things'
I enjoy the weekly email sent out by Maria Popova's Brain Pickings website. I find it a bit overwhelming (and high on the estrogens), and so sample only a few of the idea chunks it presents. I suggest you have a look. On its 9th birthday, Brain Pickings noted the "9 most important things I have learned":
1. Allow yourself the uncomfortable luxury of changing your mind.
2. Do nothing for prestige or status or money or approval alone.
3. Be generous.
4. Build pockets of stillness into your life.
5. When people try to tell you who you are, don’t believe them.
6. Presence is far more intricate and rewarding an art than productivity.
7. Expect anything worthwhile to take a long time.
8. Seek out what magnifies your spirit.
9. Don’t be afraid to be an idealist.
Blog Categories:
culture/politics,
happiness,
self,
self help
Monday, November 02, 2015
A lab experiment: visibility of wealth increases wealth inequality
Nishi et al. describe a fascinating laboratory experiment, conducted online, showing that when people can see wealth inequality in their social network, this propels further inequality through reduced cooperation and reduced social connectivity. From a summary by Gächter:
Nishi and colleagues' experimental model used an assessment of people's willingness to contribute to public goods to test how initial wealth inequality and the structure of the social network influence the evolution of inequality...can mere observation of your neighbour's wealth lead to more inequality over time, even if such information does not change economic incentives? Visible wealth might have a psychological effect by triggering social comparisons and thereby influencing economic choices that have repercussions for inequality.
...the researchers endowed all participants with tokens, worth real money...in a treatment without inequality, all participants initially received the same number of tokens; in a low-inequality treatment, participants had similar but different initial endowments; and in the high-inequality treatment there was a substantial starting difference between participants...A crucial manipulation in this experiment was wealth visibility. Under invisible conditions, the participants could observe only their own accumulated wealth. Under visibility, they could see the accumulated wealth of their connected neighbours but not the whole network....
The groups typically comprised 17 people arranged at random in a social network in which, on average, about 5 people were linked ('neighbours'). In each of the 10 rounds of the following game, participants had to decide whether to behave pro-socially ('cooperate') by reducing their own wealth by 50 tokens per connected neighbour to benefit each of them by 100 tokens, or to behave pro-selfishly ('defect') by keeping their tokens for themselves. These decisions had consequences for accumulated wealth levels and inequality. At the end of each round, the subjects learnt whether their neighbours had cooperated or defected and 30% of participants were given the opportunity to change their neighbour, that is, to either sever an existing link or to create a new one.
The authors find that, under high initial wealth inequality, visibility of neighbours' accumulated wealth increases inequality over time relative to the invisibility condition.
Here is the abstract from Nishi et al.:
Humans prefer relatively equal distributions of resources, yet societies have varying degrees of economic inequality. To investigate some of the possible determinants and consequences of inequality, here we perform experiments involving a networked public goods game in which subjects interact and gain or lose wealth. Subjects (n = 1,462) were randomly assigned to have higher or lower initial endowments, and were embedded within social networks with three levels of economic inequality (Gini coefficient = 0.0, 0.2, and 0.4). In addition, we manipulated the visibility of the wealth of network neighbours. We show that wealth visibility facilitates the downstream consequences of initial inequality—in initially more unequal situations, wealth visibility leads to greater inequality than when wealth is invisible. This result reflects a heterogeneous response to visibility in richer versus poorer subjects. We also find that making wealth visible has adverse welfare consequences, yielding lower levels of overall cooperation, inter-connectedness, and wealth. High initial levels of economic inequality alone, however, have relatively few deleterious welfare effects.
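The mechanics of a single round described above are simple enough to sketch in a few lines. The simulation below uses the payoffs from the experiment (a 50-token cost per neighbor to cooperate, a 100-token benefit to each neighbor), but the network, endowments, and coin-flip decisions are my own placeholders for the choices real participants made.
```python
# Sketch of one round of the networked public-goods game described above:
# cooperators pay 50 tokens per neighbor and give each neighbor 100 tokens.
# Random cooperation decisions stand in for the behavior of real participants.
import numpy as np

rng = np.random.default_rng(7)
n = 17                                              # group size used in the study
wealth = np.full(n, 1000.0)                         # equal endowments for simplicity

# Random symmetric network with roughly 5 neighbors per person.
adj = rng.random((n, n)) < (5 / (n - 1))
adj = np.triu(adj, 1)
adj = adj | adj.T

cooperate = rng.random(n) < 0.5                     # placeholder decision rule
for i in range(n):
    if cooperate[i]:
        neighbors = np.flatnonzero(adj[i])
        wealth[i] -= 50 * neighbors.size            # cost of cooperating
        wealth[neighbors] += 100                    # benefit delivered to each neighbor

# Gini coefficient of the resulting wealth distribution (0 = perfect equality).
sorted_w = np.sort(wealth)
cum = np.cumsum(sorted_w)
gini = 1 - 2 * np.sum(cum) / (cum[-1] * n) + 1 / n
print(f"Wealth after one round: min {wealth.min():.0f}, max {wealth.max():.0f}, Gini {gini:.2f}")
```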
Friday, October 30, 2015
More exercise correlates with younger body cells.
Reynolds points to work by Loprinzi et al. showing that physically active people have longer telomeres at the ends of their chromosomes' DNA strands than sedentary people. (A telomere is a region of repetitive nucleotide sequences at each end of a chromatid, which protects the end of the chromosome from deterioration and from fusion with neighboring chromosomes. Its length is a measure of a cell's biological age because it naturally shortens and frays with age.) Here is their abstract, complete with three (unnecessary) abbreviations, LTL (leukocyte telomere length), PA (physical activity) and MBB (movement-based behaviors), that you will have to keep in your short term memory for a few seconds:
INTRODUCTION: Short leukocyte telomere length (LTL) has become a hallmark characteristic of aging. Some, but not all, evidence suggests that physical activity (PA) may play an important role in attenuating age-related diseases and may provide a protective effect for telomeres. The purpose of this study was to examine the association between PA and LTL in a national sample of US adults from the National Health and Nutrition Examination Survey.
METHODS: National Health and Nutrition Examination Survey data from 1999 to 2002 (n = 6503; 20-84 yr) were used. Four self-report questions related to movement-based behaviors (MBB) were assessed. The four MBB included whether individuals participated in moderate-intensity PA, vigorous-intensity PA, walking/cycling for transportation, and muscle-strengthening activities. An MBB index variable was created by summing the number of MBB an individual engaged in (range, 0-4).
RESULTS: A clear dose-response relation was observed between MBB and LTL; across the LTL tertiles, respectively, the mean numbers of MBB were 1.18, 1.44, and 1.54 (Ptrend less than 0.001). After adjustments (including age) and compared with those engaging in 0 MBB, those engaging in 1, 2, 3, and 4 MBB, respectively, had a 3% (P = 0.84), 24% (P = 0.02), 29% (P = 0.04), and 52% (P = 0.004) reduced odds of being in the lowest (vs highest) tertile of LTL; MBB was not associated with being in the middle (vs highest) tertile of LTL.
CONCLUSIONS: Greater engagement in MBB was associated with reduced odds of being in the lowest LTL tertile.
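To keep the abbreviations straight, here is a small sketch showing how the MBB index is built from the four self-report items and how a reported "52% reduced odds" reads as an odds ratio; the individual responses are invented.
```python
# Sketch of the MBB (movement-based behaviors) index: a count of four yes/no
# self-report items, plus the reported odds reductions expressed as odds ratios.
# The example responses are invented.
moderate_pa, vigorous_pa, active_transport, strengthening = True, False, True, True

mbb_index = sum([moderate_pa, vigorous_pa, active_transport, strengthening])
print("MBB index (0-4):", mbb_index)                # 3 for this hypothetical person

# Reported odds reductions for 1-4 MBB vs. 0 MBB, expressed as odds ratios.
for n_mbb, reduction in [(1, 0.03), (2, 0.24), (3, 0.29), (4, 0.52)]:
    print(f"{n_mbb} MBB: odds ratio of being in the lowest LTL tertile = {1 - reduction:.2f}")
```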
Thursday, October 29, 2015
Low-power people are more trusting in social exchange.
Schilke et al. make observations suggesting that low-power individuals want the high-power people they interact with to be trustworthy, and act according to that desire:
How does lacking vs. possessing power in a social exchange affect people’s trust in their exchange partner? An answer to this question has broad implications for a number of exchange settings in which dependence plays an important role. Here, we report on a series of experiments in which we manipulated participants’ power position in terms of structural dependence and observed their trust perceptions and behaviors. Over a variety of different experimental paradigms and measures, we find that more powerful actors place less trust in others than less powerful actors do. Our results contradict predictions by rational actor models, which assume that low-power individuals are able to anticipate that a more powerful exchange partner will place little value on the relationship with them, thus tends to behave opportunistically, and consequently cannot be trusted. Conversely, our results support predictions by motivated cognition theory, which posits that low-power individuals want their exchange partner to be trustworthy and then act according to that desire. Mediation analyses show that, consistent with the motivated cognition account, having low power increases individuals’ hope and, in turn, their perceptions of their exchange partners’ benevolence, which ultimately leads them to trust.
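The mediation claim in the last sentence is a statistical one. Below is a minimal sketch of the simplest version of such an analysis, a single-mediator product-of-coefficients estimate (low power → hope → trust) on synthetic data with invented effect sizes, not the authors' full serial-mediation model.
```python
# Minimal single-mediator sketch (low power -> hope -> trust) of the kind of
# mediation analysis referred to above, on synthetic data with invented effects.
import numpy as np

rng = np.random.default_rng(3)
n = 400
low_power = rng.integers(0, 2, n).astype(float)            # 1 = low-power condition
hope = 0.6 * low_power + rng.standard_normal(n)            # condition -> mediator
trust = 0.5 * hope + 0.1 * low_power + rng.standard_normal(n)

def slope(y, x, covariate=None):
    """OLS coefficient of x, optionally controlling for one covariate."""
    cols = [np.ones(n), x] + ([covariate] if covariate is not None else [])
    beta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]
    return beta[1]

a = slope(hope, low_power)                     # effect of low power on hope
b = slope(trust, hope, covariate=low_power)    # effect of hope on trust, holding power fixed
print(f"Indirect (mediated) effect a*b = {a * b:.2f}")
print(f"Direct effect of low power on trust = {slope(trust, low_power, covariate=hope):.2f}")
```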
Wednesday, October 28, 2015
How much sleep do we really need?
A study by Yetish et al. casts fascinating light on the widespread idea that a large fraction of us in modern industrial societies are sleep-deprived, going to bed later than is "natural" and sleeping less than our bodies need. They monitored the sleep patterns of three hunter-gatherer cultures in Bolivia, Tanzania, and Namibia. Here is their summary:
Highlights
•Preindustrial societies in Tanzania, Namibia, and Bolivia show similar sleep parameters
•They do not sleep more than “modern” humans, with average durations of 5.7–7.1 hr
•They go to sleep several hours after sunset and typically awaken before sunrise
•Temperature appears to be a major regulator of human sleep duration and timing
Summary
How did humans sleep before the modern era? Because the tools to measure sleep under natural conditions were developed long after the invention of the electric devices suspected of delaying and reducing sleep, we investigated sleep in three preindustrial societies. We find that all three show similar sleep organization, suggesting that they express core human sleep patterns, most likely characteristic of pre-modern era Homo sapiens. Sleep periods, the times from onset to offset, averaged 6.9–8.5 hr, with sleep durations of 5.7–7.1 hr, amounts near the low end of those in industrial societies. There was a difference of nearly 1 hr between summer and winter sleep. Daily variation in sleep duration was strongly linked to time of onset, rather than offset. None of these groups began sleep near sunset, onset occurring, on average, 3.3 hr after sunset. Awakening was usually before sunrise. The sleep period consistently occurred during the nighttime period of falling environmental temperature, was not interrupted by extended periods of waking, and terminated, with vasoconstriction, near the nadir of daily ambient temperature. The daily cycle of temperature change, largely eliminated from modern sleep environments, may be a potent natural regulator of sleep. Light exposure was maximal in the morning and greatly decreased at noon, indicating that all three groups seek shade at midday and that light activation of the suprachiasmatic nucleus is maximal in the morning. Napping occurred on fewer than 7% of days in winter and fewer than 22% of days in summer. Mimicking aspects of the natural environment might be effective in treating certain modern sleep disorders.
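As a rough illustration of how such summary figures are obtained (not the authors' pipeline; the file and field names are assumed), sleep period length and onset relative to sunset can be computed from per-night records like this:

```python
import pandas as pd

# Hypothetical per-night records with timestamps for sunset, sleep onset, and offset.
nights = pd.read_csv("sleep_nights.csv", parse_dates=["sunset", "onset", "offset"])

# Sleep period (onset to offset) in hours, and how long after sunset sleep began.
nights["sleep_period_hr"] = (nights["offset"] - nights["onset"]).dt.total_seconds() / 3600
nights["onset_after_sunset_hr"] = (nights["onset"] - nights["sunset"]).dt.total_seconds() / 3600

# The paper reports sleep periods of roughly 6.9-8.5 hr and onset ~3.3 hr after sunset.
print(nights[["sleep_period_hr", "onset_after_sunset_hr"]].mean())
```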
Tuesday, October 27, 2015
Chilling down our religiosity and intolerance with some magnets?
A group of collaborators has used transcranial magnetic stimulation to dial down activity in the area of the posterior medial frontal cortex (pMFC) that evaluates threats and plans responses. A group of subjects who had undergone this procedure expressed less bias against immigrants and also less belief in God than a group that received a sham TMS treatment.
People cleave to ideological convictions with greater intensity in the aftermath of threat. The posterior medial frontal cortex (pMFC) plays a key role in both detecting discrepancies between desired and current conditions and adjusting subsequent behavior to resolve such conflicts. Building on prior literature examining the role of the pMFC in shifts in relatively low-level decision processes, we demonstrate that the pMFC mediates adjustments in adherence to political and religious ideologies. We presented participants with a reminder of death and a critique of their in-group ostensibly written by a member of an out-group, then experimentally decreased both avowed belief in God and out-group derogation by downregulating pMFC activity via transcranial magnetic stimulation. The results provide the first evidence that group prejudice and religious belief are susceptible to targeted neuromodulation, and point to a shared cognitive mechanism underlying concrete and abstract decision processes. We discuss the implications of these findings for further research characterizing the cognitive and affective mechanisms at play.
Monday, October 26, 2015
The hippocampus is essential for recall but not for recognition.
From Patai et al.:
Which specific memory functions are dependent on the hippocampus is still debated. The availability of a large cohort of patients who had sustained relatively selective hippocampal damage early in life enabled us to determine which type of mnemonic deficit showed a correlation with extent of hippocampal injury. We assessed our patient cohort on a test that provides measures of recognition and recall that are equated for difficulty and found that the patients' performance on the recall tests correlated significantly with their hippocampal volumes, whereas their performance on the equally difficult recognition tests did not and, indeed, was largely unaffected regardless of extent of hippocampal atrophy. The results provide new evidence in favor of the view that the hippocampus is essential for recall but not for recognition.
Friday, October 23, 2015
Brain activity associated with predicting rewards to others.
Lockwood et al. make the interesting observation that a subregion of the anterior cingulate cortex shows specialization for processing others' versus one's own rewards.
Empathy—the capacity to understand and resonate with the experiences of others—can depend on the ability to predict when others are likely to receive rewards. However, although a plethora of research has examined the neural basis of predictions about the likelihood of receiving rewards ourselves, very little is known about the mechanisms that underpin variability in vicarious reward prediction. Human neuroimaging and nonhuman primate studies suggest that a subregion of the anterior cingulate cortex in the gyrus (ACCg) is engaged when others receive rewards. Does the ACCg show specialization for processing predictions about others' rewards and not one's own and does this specialization vary with empathic abilities? We examined hemodynamic responses in the human brain time-locked to cues that were predictive of a high or low probability of a reward either for the subject themselves or another person. We found that the ACCg robustly signaled the likelihood of a reward being delivered to another. In addition, ACCg response significantly covaried with trait emotion contagion, a necessary foundation for empathizing with other individuals. In individuals high in emotion contagion, the ACCg was specialized for processing others' rewards exclusively, but for those low in emotion contagion, this region also responded to information about the subject's own rewards. Our results are the first to show that the ACCg signals probabilistic predictions about rewards for other people and that the substantial individual variability in the degree to which the ACCg is specialized for processing others' rewards is related to trait empathy.
Thursday, October 22, 2015
Drugs or therapy for depression?
I want to pass on a few clips from a piece by Friedman, summarizing work by Mayberg and collaborators at Emory University, who looked for brain activity that might predict whether a depressed patient would respond better to psychotherapy or antidepressant medication:
Using PET scans, she randomized a group of depressed patients to either 12 weeks of treatment with the S.S.R.I. antidepressant Lexapro or to cognitive behavior therapy, which teaches patients to correct their negative and distorted thinking.
Over all, about 40 percent of the depressed subjects responded to either treatment. But Dr. Mayberg found striking brain differences between patients who did well with Lexapro compared with cognitive behavior therapy, and vice versa. Patients who had low activity in a brain region called the anterior insula measured before treatment responded quite well to C.B.T. but poorly to Lexapro; conversely, those with high activity in this region had an excellent response to Lexapro, but did poorly with C.B.T.
We know that the insula is centrally involved in the capacity for emotional self-awareness, cognitive control and decision making, all of which are impaired by depression. Perhaps cognitive behavior therapy has a more powerful effect than an antidepressant in patients with an underactive insula because it teaches patients to control their emotionally disturbing thoughts in a way that an antidepressant cannot.
These neurobiological differences may also have important implications for treatment, because for most forms of depression, there is little evidence to support one form of treatment over another...Currently, doctors typically prescribe antidepressants on a trial-and-error basis, selecting or adding one antidepressant after another when a patient fails to respond to the first treatment. Rarely does a clinician switch to an empirically proven psychotherapy like cognitive behavior therapy after a patient fails to respond to medication, although these data suggest this might be just the right strategy. One day soon, we may be able to quickly scan a patient with an M.R.I. or PET, check the brain activity “fingerprint” and select an antidepressant or psychotherapy accordingly.
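Purely as a schematic of the decision rule this finding suggests (the threshold and numbers below are invented for illustration, and nothing here is a clinical tool):

```python
def suggest_treatment(insula_activity: float, cohort_median: float) -> str:
    """Schematic triage rule based on the pattern Mayberg reported:
    low pretreatment anterior-insula activity predicted a better response to CBT,
    high activity predicted a better response to the SSRI (Lexapro).
    Using the cohort median as the cut point is an assumption for illustration."""
    return "CBT" if insula_activity < cohort_median else "SSRI"

print(suggest_treatment(insula_activity=0.8, cohort_median=1.0))  # -> CBT
```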
Is the nonspecific nature of talk therapy — feeling understood and cared for by another human being — responsible for its therapeutic effect? Or will specific types of therapy — like C.B.T. or interpersonal or psychodynamic therapy — show distinctly different clinical and neurobiological effects for various psychiatric disorders?...Right now we don’t have a clue, in part because of the current research funding priorities of the National Institute of Mental Health, which strongly favors brain science over psychosocial treatments. But these are important questions, and we owe it to our patients to try to answer them.
Wednesday, October 21, 2015
Hoopla over a bit of rat brain…a complete brain simulation?
A vastly expensive and heavily marketed international collaborative "Blue Brain Project (BBP)" has now reported its first digital reconstruction of a slice of rat somatosensory cortex, the most complete simulation of a piece of excitable brain matter to date (still, a speck of tissue compared to the human brain, which is two million times larger). I, along with a chorus of critics, cannot see how a static depiction and reconstruction of a cortical column (~30,000 neurons, ~40 million synapses) is anything but a waste of money. The biological reality is that those neurons and synapses are not just sitting there, with static components cranking away like the innards of a computer. The wiring is plastic, constantly changing as axons, dendrites, and synapses both grow and retract, changing the number and kind of their connections over which information flows.
Koch and Buice make the generous point that all this might not matter if one could devise the biological equivalent of Alan Turing's Imitation game, seeing if an observer could tell whether output they observe for a given input is being generated by the simulation or by electrical recording from living tissue. Here are some interesting clips from their article in Cell.
...the current BBP model stops with the continuous and deterministic Hodgkin-Huxley currents...And therein lies an important lesson. If the real and the synthetic can’t be distinguished at the level of firing rate activity (even though it is uncontroversial that spiking is caused by the concerted action of tens of thousands of ionic channel proteins), the molecular level of granularity would appear to be irrelevant to explain electrical activity. Teasing out which mechanisms contribute to any specific phenomena is essential to what is meant by understanding.
Markram et al. claim that their results point to the minimal datasets required to model cortex. However, we are not aware of any rigorous argument in the present triptych of manuscripts, specifying the relevant level of granularity. For instance, are active dendrites, such as those of the tall, layer 5 pyramidal cells, essential? Could they be removed without any noticeable effect? Why not replace the continuous, macroscopic, and deterministic HH equations with stochastic Markov models of thousands of tiny channel conductances? Indeed, why not consider quantum mechanical levels of descriptions? Presumably, the latter two avenues have not been chosen because of their computational burden and the intuition that they are unlikely to be relevant. The Imitation Game offers a principled way of addressing these important questions: only add a mechanism if its impact on a specific set of measurables can be assessed by a trained observer.
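One way to make the proposed Imitation Game concrete is as a two-alternative discrimination test: on each trial an observer sees one simulated and one recorded trace and guesses which is real, and we then ask whether correct identifications exceed chance. A minimal sketch, assuming we only have the observer's trial counts:

```python
from scipy.stats import binomtest

# Hypothetical outcome: the observer picked the real recording on 34 of 50 trials.
n_trials, n_correct = 50, 34
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value for above-chance discrimination: {result.pvalue:.3f}")

# If the observer cannot beat chance, the simulation "passes" at this level of
# description, and adding finer-grained mechanisms is not justified by this measurable.
```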
Consider the problem of numerical weather prediction and climate modeling, tasks whose physico-chemical and computational complexity is comparable to whole-brain modeling. Planet-wide simulations that cover timescales from hours to decades require a deep understanding of how physical systems interact across multiple scales and careful choices about the scale at which different phenomena are modeled. This has led to an impressive increase in predictive power since 1950, when the first such computer calculations were performed. Of course, a key difference between weather prediction and whole-brain simulation is that the former has a very specific and quantifiable scientific question (to wit: “is it going to rain tomorrow?”). The BBP has created an impressive initial scaffold that will facilitate asking these kinds of questions for brains.
Blog Categories:
attention/perception,
brain plasticity,
technology
Tuesday, October 20, 2015
Meditation madness
Adam Grant does a NYTimes Op-Ed piece that mirrors some of my own sentiments about the current meditation craze. There would seem to be almost nothing that practicing meditation doesn't enhance (ingrown toenails?). I'm fascinated by what studies on meditation have told us about how the mind works, and MindBlog has done many posts on the topic (click the meditation link under 'selected blog categories' in the right column). I and many others personally find it very useful in maintaining a calm and focused mind. But ... it is not a universal panacea, and many of its effects can be accomplished, as Grant points out, by other means. (By the way, a Wisconsin colleague of mine who has assisted in a number of the meditation studies conducted by Richard Davidson and collaborators at the University of Wisconsin feels that people who engage in meditation regimens display more depressive behaviors after a period of time.) Some clips from Grant's screed:
...Every benefit of the practice can be gained through other activities...This is the conclusion from an analysis of 47 trials of meditation programs, published last year in JAMA Internal Medicine: “We found no evidence that meditation programs were better than any active treatment (i.e., drugs, exercise and other behavioral therapies).”
O.K., so meditation is just one of many ways to fight stress. But there’s another major benefit of meditating: It makes you mindful. After meditating, people are more likely to focus their attention in the present. But as the neuroscientist Richard Davidson and the psychologist Alfred Kaszniak recently lamented, “There are still very few methodologically rigorous studies that demonstrate the efficacy of mindfulness-based interventions in either the treatment of specific diseases or in the promotion of well-being.”
And guess what? You don’t need to meditate to achieve mindfulness either...you can become more mindful by thinking in conditionals instead of absolutes...Change “is” to “could be,” and you become more mindful. The same is true when you look for an answer rather than the answer. (I would also point out that 'mindfulness' can frequently be generated by switching in your thoughts from a first-person to a third-person perspective.) Finally:
...in some situations, meditation may be harmful: Willoughby Britton, a Brown University Medical School professor, has discovered numerous cases of traumatic meditation experiences that intensify anxiety, reduce focus and drive, and leave people feeling incapacitated.
Monday, October 19, 2015
A brain switch that can make the familiar seem new?
We all face the issue of how to refresh and renew our energy and perspective after our brains have adapted, habituated, or desensitized to an ongoing interest or activity that has lost its novelty. As I engage my long-term interests in piano performance and studying how our minds work, I wish I could throw a "reset" switch in my brain that would let me approach the material as if it were new again. Ho et al. appear to have found such a switch, in the perirhinal cortex of rats, that regulates whether images are perceived as familiar or novel:
Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally.
Unfortunately, given that rather fancy optogenetic methods were used to vary oscillatory activity in the perirhinal cortex, no human applications of this work are imminent.
Blog Categories:
attention/perception,
brain plasticity,
technology
Sunday, October 18, 2015
Sir Reginald's Marvellous Organ
Under the "random curious stuff" category noted in MindBlog's title, above, I can't resist passing on this naughty video sent by a friend...apologies to sensitive readers who only want the brain stuff.
Friday, October 16, 2015
Great apes can look ahead in time
Yet another supposed distinction between human and animal minds has bitten the dust. The prevailing dogma (expressed in my talk "The Beast Within") has been that animals don't anticipate the future. Now Kano and Hirata show that chimpanzees remember a movie they viewed a day earlier, because when the movie is shown again their eyes move to the part of the screen where an action relevant to the storyline is about to happen.
Highlights
•We developed a novel eye-tracking task to examine great apes’ memory skills
•Apes watched the same videos twice across 2 days, with a 24-hr delay
•Apes made anticipatory looks based on where-what information on the second day
•Apes thus encoded ongoing events into long-term memory by single experiences
Summary
Everyday life poses a continuous challenge for individuals to encode ongoing events, retrieve past events, and predict impending events. Attention and eye movements reflect such online cognitive and memory processes, especially through “anticipatory looks”. Previous studies have demonstrated the ability of nonhuman animals to retrieve detailed information about single events that happened in the distant past. However, no study has tested whether nonhuman animals employ online memory processes, in which they encode ongoing movie-like events into long-term storage during single viewing experiences. Here, we developed a novel eye-tracking task to examine great apes’ anticipatory looks to the events that they had encountered one time 24 hr earlier. Half-minute movie clips depicted novel and potentially alarming situations to the participant apes (six bonobos, six chimpanzees). In the experiment 1 clip, an aggressive ape-like character came out from one of two identical doors. While viewing the same movie again, apes anticipatorily looked at the door where the character would show up. In the experiment 2 clip, the human actor grabbed one of two objects and attacked the character with it. While viewing the same movie again but with object-location switched, apes anticipatorily looked at the object that the human would use, rather than the former location of the object. Our results thus show that great apes, just by watching the events once, encoded particular information (location and content) into long-term memory and later retrieved that information at a particular time in anticipation of the impending events.
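For readers curious how "anticipatory looks" are usually quantified, here is a generic sketch (not the authors' code; the numbers are made up): sum looking time to the target versus the alternative area of interest during the window before the event.

```python
import pandas as pd

# Hypothetical fixations during the pre-event window of the second viewing:
# one row per fixation, with its duration and the area of interest (AOI) it fell in.
fix = pd.DataFrame({
    "ape": ["A", "A", "B", "B", "B"],
    "aoi": ["target_door", "other_door", "target_door", "target_door", "other_door"],
    "duration_ms": [420, 180, 300, 260, 150],
})

looking = fix.groupby(["ape", "aoi"])["duration_ms"].sum().unstack(fill_value=0)
looking["target_preference"] = looking["target_door"] / looking.sum(axis=1)
print(looking)  # values above 0.5 indicate anticipatory looking toward the correct location
```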
Blog Categories:
animal behavior,
future,
memory/learning
Thursday, October 15, 2015
Rhodopsin curing blindness?
In a previous life (1962-1998), my laboratory studied how the rhodopsin visual pigment in our eyes changes light into a nerve signal, so it excites me to see major advances in understanding our vision and curing visual diseases. I want to pass on a nice graphic offered by Van Gelder and Kaur to illustrate recent work by Cehajic-Kapetanovic et al. (open access) showing that delivering the visual pigment rhodopsin by viral gene therapy to the inner retinal nerve cells of retinas whose rods and cones have degenerated can restore light sensitivity, vision-like physiology, and behavior to mice blind from outer retinal degeneration:
Figure - Gene therapy rescue of vision in retinal degeneration. (A) In the healthy retina, light penetrates from inner to outer retina to reach the cones and rods, which transduce signals through horizontal, bipolar, amacrine, and ultimately retinal ganglion cells to the brain. (B) In outer retinal degenerative diseases, loss of photoreceptors renders the retina insensitive to light. (C) Gene therapy with AAV2/2 virus expressing human rhodopsin (hRod) under the control of the CAG promoter results in expression of the photopigment in many surviving cells of the inner retina, and results in restoration of light responses recognized by the brain. (D) More selective expression of rhodopsin in a subset of bipolar cells is achieved by use of a virus in which expression is driven by the grm6 promoter. This version appeared to restore the most natural visual function to blind mice.
Wednesday, October 14, 2015
Can epigenetics explain homosexuality?
Michael Balter notes work presented by Vilain's UCLA laboratory at this year's American Society of Human Genetics meeting. His abstract, followed by some clips of his text:
(added note: an alert reader, see comment below, just added this critique of the following work from The Atlantic)
A new study suggests that epigenetic effects—chemical modifications of the human genome that alter gene activity without changing the DNA sequence—may sometimes influence sexual orientation. Researchers studied methylation, the attachment of a methyl group to specific regions of DNA, in 37 pairs of male identical twins who were discordant—meaning that one was gay and the other straight—and 10 pairs who were both gay. Their search yielded five genome regions where the methylation pattern appears very closely linked to sexual orientation. A model that predicted sexual orientation based on these patterns was almost 70% accurate within this group—although that predictive ability does not necessarily apply to the general population.
Researchers thought they were hot on the trail of “gay genes” in 1993, when a team led by geneticist Dean Hamer of the National Cancer Institute reported that one or more genes for homosexuality had to reside on Xq28, a large region on the X chromosome...but some teams were unable to replicate the findings and the actual genes have not been found...Twin studies suggested, moreover, that gene sequences can't be the full explanation. For example, the identical twin of a gay man, despite having the same genome, only has a 20% to 50% chance of being gay himself.
That's why some have suggested that epigenetics—instead of or in addition to traditional genetics—might be involved. During development, chromosomes are subject to chemical changes that don't affect the nucleotide sequence but can turn genes on or off; the best known example is methylation, in which a methyl group is attached to specific DNA regions. Such “epi-marks” can remain in place for a lifetime, but most are erased when eggs and sperm are produced, so that a fetus starts with a blank slate. Recent studies, however, have shown that some marks are passed on to the next generation.
In a 2012 paper, Rice and his colleagues suggested that such unerased epi-marks might cause homosexuality when they are passed on from father to daughter or from mother to son...Such ideas inspired Tuck Ngun, a postdoc in Vilain's lab, to study the methylation patterns at 140,000 regions in the DNA of 37 pairs of male identical twins who were discordant—meaning that one was gay and the other straight—and 10 pairs who were both gay...the team identified five regions in the genome where the methylation pattern appears very closely linked to sexual orientation...Just why identical twins sometimes end up with different methylation patterns isn't clear. If Rice's hypothesis is right, their mothers' epi-marks might have been erased in one son, but not the other; or perhaps neither inherited any marks but one of them picked them up in the womb...In an earlier review, Ngun and Vilain cited evidence that methylation may be determined by subtle differences in the environment each fetus experiences during gestation, such as their exact locations within the womb and how much of the maternal blood supply each receives.
Tuesday, October 13, 2015
Musical expertise changes the brain's functional connectivity during audiovisual integration
Music notation reading encapsulates auditory, visual, and motor information in a highly organized manner and therefore provides a useful model for studying multisensory phenomena. Paraskevopoulos et al. show that large-scale functional brain networks underpinning audiovisual integration are organized differently in musicians and nonmusicians. They examine brain responses to congruent (the sound played corresponds to the musical notation) and incongruent (the sound played differs from the notation) stimuli.
Multisensory integration engages distributed cortical areas and is thought to emerge from their dynamic interplay. Nevertheless, large-scale cortical networks underpinning audiovisual perception have remained undiscovered. The present study uses magnetoencephalography and a methodological approach to perform whole-brain connectivity analysis and reveals, for the first time to our knowledge, the cortical network related to multisensory perception. The long-term training-related reorganization of this network was investigated by comparing musicians to nonmusicians. Results indicate that nonmusicians rely on processing visual clues for the integration of audiovisual information, whereas musicians use a denser cortical network that relies mostly on the corresponding auditory information. These data provide strong evidence that cortical connectivity is reorganized due to expertise in a relevant cognitive domain, indicating training-related neuroplasticity.
Figure - Paradigm of an audiovisual congruent and incongruent trial. (A) A congruent trial. (B) An incongruent trial. The line “time” represents the duration of the presentation of the auditory and visual part of the stimulus. The last picture of each trial represents the intertrial stimulus in which subjects had to answer if the trial was congruent or incongruent.
Figure - Cortical network underpinning audiovisual integration. (Upper) Statistical parametric maps of the significant networks for the congruent > incongruent comparison. Networks presented are significant at P < 0.001, FDR corrected. The color scale indicates t values. (Lower) Node strength of the significant networks for each comparison. Strength is represented by node size.
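For readers unfamiliar with the graph metric in the lower panel: a node's "strength" is simply the sum of its connection weights. A minimal sketch with a made-up connectivity matrix:

```python
import numpy as np

# Hypothetical weighted connectivity matrix among 5 cortical sources
# (symmetric, zero diagonal); the values are illustrative only.
W = np.array([
    [0.0, 0.8, 0.1, 0.0, 0.3],
    [0.8, 0.0, 0.5, 0.2, 0.0],
    [0.1, 0.5, 0.0, 0.4, 0.6],
    [0.0, 0.2, 0.4, 0.0, 0.7],
    [0.3, 0.0, 0.6, 0.7, 0.0],
])

node_strength = W.sum(axis=1)  # sum of each node's connection weights
print(node_strength)           # larger values would be drawn as larger nodes in the figure
```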
Blog Categories:
attention/perception,
brain plasticity,
music
Monday, October 12, 2015
Runner's high? Thank your internal marijuana...
From Fuss et al.:
Exercise is rewarding, and long-distance runners have described a runner’s high as a sudden pleasant feeling of euphoria, anxiolysis, sedation, and analgesia. A popular belief has been that endogenous endorphins mediate these beneficial effects. However, running exercise increases blood levels of both β-endorphin (an opioid) and anandamide (an endocannabinoid). Using a combination of pharmacologic, molecular genetic, and behavioral studies in mice, we demonstrate that cannabinoid receptors mediate acute anxiolysis and analgesia after running. We show that anxiolysis depends on intact cannabinoid receptor 1 (CB1) receptors on forebrain GABAergic neurons and pain reduction on activation of peripheral CB1 and CB2 receptors. We thus demonstrate that the endocannabinoid system is crucial for two main aspects of a runner's high. Sedation, in contrast, was not influenced by cannabinoid or opioid receptor blockage, and euphoria cannot be studied in mouse models.
Friday, October 09, 2015
A Gee Whiz! moment. Activating neurons with ultrasound.
Optogenetics, making nerve cells sensitive to light by a genetic manipulation, has the limitation that light doesn't penetrate living tissue very well, and so must be delivered invasively through a thin fiber-optic implant. Frank and Gorman offer a video clip describing work of Ibsen et al., who show that a nerve cell can be genetically altered to become sensitive to activation by non-invasive ultrasound, an approach they described as "sonogenetics." The video (I could do without the rock music sound track) shows a worm's movement changing direction as a nerve cell is stimulated by ultrasound.
Thursday, October 08, 2015
1/f brain noise increases with aging.
From Gazzaley and collaborators, a description of what is going on in our aging brains:
Aging is associated with performance decrements across multiple cognitive domains. The neural noise hypothesis, a dominant view of the basis of this decline, posits that aging is accompanied by an increase in spontaneous, noisy baseline neural activity. Here we analyze data from two different groups of human subjects: intracranial electrocorticography from 15 participants over a 38 year age range (15–53 years) and scalp EEG data from healthy younger (20–30 years) and older (60–70 years) adults to test the neural noise hypothesis from a 1/f noise perspective. Many natural phenomena, including electrophysiology, are characterized by 1/f noise. The defining characteristic of 1/f is that the power of the signal frequency content decreases rapidly as a function of the frequency (f) itself. The slope of this decay, the noise exponent (χ), is often <−1 for electrophysiological data and has been shown to approach white noise (defined as χ = 0) with increasing task difficulty. We observed, in both electrophysiological datasets, that aging is associated with a flatter (more noisy) 1/f power spectral density, even at rest, and that visual cortical 1/f noise statistically mediates age-related impairments in visual working memory. These results provide electrophysiological support for the neural noise hypothesis of aging.
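To make the noise exponent concrete: it is the slope of the power spectrum plotted in log-log coordinates, so a rough estimate is a straight-line fit of log power against log frequency. A minimal sketch (illustrative only; the authors' pipeline differs in its details):

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                              # sampling rate in Hz (an assumption)
signal = np.random.randn(60 * int(fs))   # stand-in for 60 s of EEG/ECoG data

freqs, psd = welch(signal, fs=fs, nperseg=2048)
band = (freqs >= 1) & (freqs <= 50)      # frequency range for the fit (an assumption)

# Noise exponent chi: slope of log10(power) versus log10(frequency).
chi, intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print(f"estimated 1/f exponent chi = {chi:.2f}")  # closer to 0 means a flatter, 'noisier' spectrum
```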
Wednesday, October 07, 2015
Methionine, an amino acid, enhances recovery from cocaine addiction.
Wright et al. use a rat model to show that the common amino acid methionine - which can serve as a methyl group donor for the DNA methylation that regulates neural functions associated with learning, memory, and synaptic plasticity - can reduce addictive-like behaviors such as drug seeking, and can block a cocaine-induced marker of neuronal activation after reinstatement in the nucleus accumbens and the medial prefrontal cortex, two brain regions involved in drug seeking and relapse. Here is the technical abstract:
Epigenetic mechanisms, such as histone modifications, regulate responsiveness to drugs of abuse, such as cocaine, but relatively little is known about the regulation of addictive-like behaviors by DNA methylation. To investigate the influence of DNA methylation on the locomotor-activating effects of cocaine and on drug-seeking behavior, rats receiving methyl supplementation via chronic L-methionine (MET) underwent either a sensitization regimen of intermittent cocaine injections or intravenous self-administration of cocaine, followed by cue-induced and drug-primed reinstatement. MET blocked sensitization to the locomotor-activating effects of cocaine and attenuated drug-primed reinstatement, with no effect on cue-induced reinstatement or sucrose self-administration and reinstatement. Furthermore, upregulation of DNA methyltransferase 3a and 3b and global DNA hypomethylation were observed in the nucleus accumbens core (NAc), but not in the medial prefrontal cortex (mPFC), of cocaine-pretreated rats. Glutamatergic projections from the mPFC to the NAc are critically involved in the regulation of cocaine-primed reinstatement, and activation of both brain regions is seen in human addicts when reexposed to the drug. When compared with vehicle-pretreated rats, the immediate early gene c-Fos (a marker of neuronal activation) was upregulated in the NAc and mPFC of cocaine-pretreated rats after cocaine-primed reinstatement, and chronic MET treatment blocked its induction in both regions. Cocaine-induced c-Fos expression in the NAc was associated with reduced methylation at CpG dinucleotides in the c-Fos gene promoter, effects reversed by MET treatment. Overall, these data suggest that drug-seeking behaviors are, in part, attributable to a DNA methylation-dependent process, likely occurring at specific gene loci (e.g., c-Fos) in the reward pathway.
Tuesday, October 06, 2015
Memory aging and brain maintenance
An open access article by Nyberg et al. notes
The association of intact memory functioning in old age with maintenance and preservation of a functionally young and healthy brain may seem obvious. However, up to the present the focus has largely been on possible forms of compensatory brain responses. This is so, even though it remains unclear whether memory performance in old age can benefit from altered patterns of brain activation, with almost as many studies showing positive as negative relationships.
Their abstract suggests the relevance of “brain maintenance”:
Episodic memory and working memory decline with advancing age. Nevertheless, large-scale population-based studies document well-preserved memory functioning in some older individuals. The influential ‘reserve’ notion holds that individual differences in brain characteristics or in the manner people process tasks allow some individuals to cope better than others with brain pathology and hence show preserved memory performance. Here, we discuss a complementary concept, that of brain maintenance (or relative lack of brain pathology), and argue that it constitutes the primary determinant of successful memory aging. We discuss evidence for brain maintenance at different levels: cellular, neurochemical, gray- and white-matter integrity, and systems-level activation patterns. Various genetic and lifestyle factors support brain maintenance in aging and interventions may be designed to promote maintenance of brain structure and function in late life.
The figures are worth a look, for they illustrate how a fraction of older individuals have brains that, at different levels of brain organization, are similar to younger brains in their relative lack of brain pathology. They say very little about the “lifestyle factors” or “interventions” that might promote brain maintenance.
Monday, October 05, 2015
The wealthy are different from you and me...
The abstract from an article titled "The distributional preferences of an elite" by Fisman et al.:
We studied the distributional preferences of an elite cadre of Yale Law School students, a group that will assume positions of power in U.S. society. Our experimental design allows us to test whether redistributive decisions are consistent with utility maximization and to decompose underlying preferences into two qualitatively different tradeoffs: fair-mindedness versus self-interest, and equality versus efficiency. Yale Law School subjects are more consistent than subjects drawn from the American Life Panel, a diverse sample of Americans. Relative to the American Life Panel, Yale Law School subjects are also less fair-minded and substantially more efficiency-focused. We further show that our measure of equality-efficiency tradeoffs predicts Yale Law School students’ career choices: Equality-minded subjects are more likely to be employed at nonprofit organizations.