This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Friday, June 07, 2013
We can learn new information during sleep.
Arzi et al. have devised a nice demonstration of how we can learn new information during our sleep. They paired pleasant and unpleasant odors with different tones during sleep, and then measured the subjects' sniff responses to the tones alone when they were awake. Tones associated with pleasant smells produced stronger sniffs, and tones associated with disgusting smells produced weaker sniffs, despite the subjects' lack of awareness of the learning process. The abstract:
During sleep, humans can strengthen previously acquired memories, but whether they can acquire entirely new information remains unknown. The nonverbal nature of the olfactory sniff response, in which pleasant odors drive stronger sniffs and unpleasant odors drive weaker sniffs, allowed us to test learning in humans during sleep. Using partial-reinforcement trace conditioning, we paired pleasant and unpleasant odors with different tones during sleep and then measured the sniff response to tones alone during the same nights' sleep and during ensuing wake. We found that sleeping subjects learned novel associations between tones and odors such that they then sniffed in response to tones alone. Moreover, these newly learned tone-induced sniffs differed according to the odor pleasantness that was previously associated with the tone during sleep. This acquired behavior persisted throughout the night and into ensuing wake, without later awareness of the learning process. Thus, humans learned new information during sleep.
Thursday, June 06, 2013
Childhood self-control predicts health, wealth, and public safety
An international collaboration between researchers at universities in the US, UK, Canada, and New Zealand has generated this study, which speaks for itself (the participants are members of the Dunedin Multidisciplinary Health and Development Study, which tracks the development of 1,037 individuals born in 1972–1973 in Dunedin, New Zealand):
Policy-makers are considering large-scale programs aimed at self-control to improve citizens’ health and wealth and reduce crime. Experimental and economic studies suggest such programs could reap benefits. Yet, is self-control important for the health, wealth, and public safety of the population? Following a cohort of 1,000 children from birth to the age of 32 y, we show that childhood self-control predicts physical health, substance dependence, personal finances, and criminal offending outcomes, following a gradient of self-control. Effects of children's self-control could be disentangled from their intelligence and social class as well as from mistakes they made as adolescents. In another cohort of 500 sibling-pairs, the sibling with lower self-control had poorer outcomes, despite shared family background. Interventions addressing self-control might reduce a panoply of societal costs, save taxpayers money, and promote prosperity.
Self-control gradient. Children with low self-control had poorer health (A), more wealth problems (B), more single-parent child rearing (C), and more criminal convictions (D) than those with high self-control.
Blog Categories:
acting/choosing,
culture/politics,
social cognition
Wednesday, June 05, 2013
MindBlog starts up some summer music - a Poulenc Valse
Calling it summer music is being optimistic... it is still very chilly in Madison. I'm starting to select some pieces for an early fall musicale at my Twin Valley home in Middleton, WI. This Poulenc valse is fun and bouncy, and gives me an excuse to relearn the techie side of mixing good quality audio with video.
The chemistry of protecting our brains by fasting.
Actually, I'm making a big assumption in the post title... namely that the results obtained by Gräff et al. in mice would extrapolate to similar findings in the human brain. In several animal models, a reduced consumption of calories seems to protect against cognitive deficits such as memory loss, in addition to acting on many different cell types and tissues to slow down aging. They found that caloric restriction effectively delays the onset of neurodegeneration and preserves structural and functional synaptic plasticity as well as memory capacities. Fasting activates the expression and activity of the nicotinamide adenine dinucleotide (NAD)–dependent protein deacetylase SIRT1, a known promoter of neuronal life span. (A deacetylase is an enzyme that cleaves acetate groups - think acetic acid or vinegar - from their attachment to proteins.) Surprisingly, this effect of reduced consumption of calories is mimicked by a small-molecule SIRT1-activating compound. (Just in case you were curious, the compound is SRT3657 [tert-butyl 4-((2-(2-(6-(2-(tert-butoxycarbonyl(methyl)amino)ethylamino)-2-butylpyrimidine-4-carboxamido)phenyl)thiazolo[5,4-b]pyridin-6-yl)methoxy)piperidine-1-carboxylate])!! Mice treated with this substance recapitulated the beneficial effects of caloric restriction against neurodegeneration-associated pathologies. If this mechanism also applies to humans, SIRT1 may represent an appealing pharmacological target against neurodegeneration. Here is the abstract:
Caloric restriction (CR) is a dietary regimen known to promote lifespan by slowing down the occurrence of age-dependent diseases. The greatest risk factor for neurodegeneration in the brain is age, from which follows that CR might also attenuate the progressive loss of neurons that is often associated with impaired cognitive capacities. In this study, we used a transgenic mouse model that allows for a temporally and spatially controlled onset of neurodegeneration to test the potentially beneficial effects of CR. We found that in this model, CR significantly delayed the onset of neurodegeneration and synaptic loss and dysfunction, and thereby preserved cognitive capacities. Mechanistically, CR induced the expression of the known lifespan-regulating protein SIRT1, prompting us to test whether a pharmacological activation of SIRT1 might recapitulate CR. We found that oral administration of a SIRT1-activating compound essentially replicated the beneficial effects of CR. Thus, SIRT1-activating compounds might provide a pharmacological alternative to the regimen of CR against neurodegeneration and its associated ailments.
Monday, June 03, 2013
Long-term improvement of brain function and cognition with brain stimulation and cognitive training.
A group of collaborators from the University of Oxford and Innsbruck Medical University have published an observation that simple transcranial random noise stimulation (TRNS) of the bilateral (both sides of the brain) dorsolateral prefrontal cortex (DLPFC), applied during cognitive training over five days, causes improvements in learning and performance of complex arithmetic tasks (both calculation and drill learning) that still persist on testing 6 months later. This correlates with long-lasting changes in oxygenated blood flow, measured by near-infrared spectroscopy, that suggest more efficient neurovascular coupling within the left DLPFC. Here is their complete abstract:
Noninvasive brain stimulation has shown considerable promise for enhancing cognitive functions by the long-term manipulation of neuroplasticity. However, the observation of such improvements has been focused at the behavioral level, and enhancements largely restricted to the performance of basic tasks. Here, we investigate whether transcranial random noise stimulation (TRNS) can improve learning and subsequent performance on complex arithmetic tasks. TRNS of the bilateral dorsolateral prefrontal cortex (DLPFC), a key area in arithmetic, was uniquely coupled with near-infrared spectroscopy (NIRS) to measure online hemodynamic responses within the prefrontal cortex. Five consecutive days of TRNS-accompanied cognitive training enhanced the speed of both calculation- and memory-recall-based arithmetic learning. These behavioral improvements were associated with defined hemodynamic responses consistent with more efficient neurovascular coupling within the left DLPFC. Testing 6 months after training revealed long-lasting behavioral and physiological modifications in the stimulated group relative to sham controls for trained and nontrained calculation material. These results demonstrate that, depending on the learning regime, TRNS can induce long-term enhancement of cognitive and brain functions. Such findings have significant implications for basic and translational neuroscience, highlighting TRNS as a viable approach to enhancing learning and high-level cognition by the long-term modulation of neuroplasticity.
For those of you who might well ask "How exactly is TRNS done?" here is a clip from their experimental procedures section. The photograph suggests a rather imposing device!:
Subjects received TRNS while performing the learning task each day. Two electrodes (5 cm × 5 cm) were positioned over areas of scalp corresponding to the DLPFC (F3 and F4, identified in accordance with the international 10-20 EEG procedure; see the figure). Electrodes were encased in saline-soaked synthetic sponges to improve contact with the scalp and avoid skin irritation. Stimulation was delivered by a DC-Stimulator-Plus device (DC-Stimulator-Plus, neuroConn). Noise in the high-frequency band (100–600 Hz) was chosen as it elicits greater neural excitation than lower frequency stimulation. For the TRNS group, current was administered for 20 min, with 15 s increasing and decreasing ramps at the beginning and end, respectively, of each session of stimulation. In the sham group current was applied for 30 s after upward ramping and then terminated.
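For readers who want a more concrete picture, here is a minimal Python sketch of what a band-limited random-noise stimulation waveform of this kind might look like. It only illustrates the protocol quoted above (20 min of 100–600 Hz noise with 15 s ramps); the sampling rate, peak amplitude, and FFT-based filtering are my own assumptions, not parameters from the paper or the neuroConn device.

```python
import numpy as np

def trns_waveform(duration_s=20 * 60, ramp_s=15, fs=2_000,
                  band=(100.0, 600.0), peak=1.0, seed=0):
    """Band-limited random-noise current with linear on/off ramps.

    duration_s, ramp_s and band follow the protocol quoted above; fs
    (sampling rate), peak (arbitrary amplitude units) and the FFT-based
    filtering are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    # White noise, keeping only Fourier components inside the 100-600 Hz band.
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
    current = np.fft.irfft(spectrum, n)
    current *= peak / np.max(np.abs(current))      # scale to the peak amplitude
    # 15 s linear ramps at the beginning and end of the session.
    ramp_n = int(ramp_s * fs)
    envelope = np.ones(n)
    envelope[:ramp_n] = np.linspace(0.0, 1.0, ramp_n)
    envelope[-ramp_n:] = np.linspace(1.0, 0.0, ramp_n)
    return current * envelope

wave = trns_waveform()
print(wave.shape, float(wave.max()), float(wave.min()))
```

The sham condition described above would simply truncate such a waveform: ramp up for 15 s, deliver current for 30 s, then terminate.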
Thursday, May 30, 2013
When more support is less....
Finkel and Fitzsimons, whose work I mentioned in a post several years ago, review studies showing that the children of parents who generously finance and regulate in detail their education ("helicopter parenting") get worse grades and feel less satisfied with their lives.
It seems that certain forms of help can dilute recipients' sense of accountability for their own success. The college student might think: If Mom and Dad are always around to solve my problems, why spend three straight nights in the library during finals rather than hanging out with my friends?
They reference their previous work (see MindBlog link above) showing that this effect generalizes to many 'helping' situations.
Women who thought about how their spouse was helpful with their health and fitness goals became less motivated to work hard to pursue those goals: relative to the control group, these women planned to spend one-third less time in the coming week pursuing their health and fitness goals.
....the problem: how can we help our children (and our spouses, friends and co-workers) achieve their goals without undermining their sense of personal accountability and motivation to achieve them?...The answer, research suggests, is that our help has to be responsive to the recipient's circumstances: it must balance their need for support with their need for competence. We should restrain our urge to help unless the recipient truly needs it, and even then, we should calibrate it to complement rather than substitute for the recipient's efforts.
(I like to think that this would be a good description of how I ran my research laboratory, training graduate and post-doctoral students, for 30 years.) A final clip:
...providing help is most effective under a few conditions: when the recipient clearly needs it, when our help complements rather than replaces the recipient’s own efforts, and when it makes recipients feel that we’re comfortable having them depend on us.
So yes, by all means, parents, help your children. But don’t let your action replace their action. Support, don’t substitute. Your children will be more likely to achieve their goals — and, who knows, you might even find some time to get your own social life back on track.
Giant, Glowing Plastic Brain on Wheels
Even though I'm usually a curmudgeon about brain hype, Obama's Brain initiative, etc., I have to admire the enthusiasm and persistence of CUNY college senior Tyler Alterman, who, as his senior thesis project, is trying to get a cognitive science lab on wheels on the road. He hopes that the mobile lab, dubbed The Think Tank, will help close the gender and race gap in STEM skills (science, technology, engineering, and math) that are the key to good jobs, through hands-on psych and neuroscience learning at schools and museums. He has raised $32,000 from crowd funding, and hopes to raise the final $20,000 needed to put the mobile lab on the streets, in part through a public gala on June 18 at the Macaulay Honors College of CUNY.
Wednesday, May 29, 2013
How we work: 'brain waves' versus modern phrenology
Alexander et al. have analyzed data from magnetoencephalogram (MEG), electroencephalogram (EEG), and electrocorticogram (ECoG) recordings, focusing on globally synchronous fields in within-trial evoked brain activity. They quantified several signal components and compared topographies of activation across large-scale cortex. They found that the topography of evoked responses was primarily a function of within-trial phase, and that within-trial phase topography could be modeled as traveling waves. Traveling waves explained more signal than the trial-averaged phase topography. Here is an edited clip of explanation from Alexander:
The brain can be studied on various scales... "You have the neurons, the circuits between the neurons, the Brodmann areas – brain areas that correspond to a certain function – and the entire cortex. Traditionally, scientists looked at local activity when studying brain activity, for example, activity in the Brodmann areas. To do this, you take EEGs (electroencephalograms) to measure the brain's electrical activity while a subject performs a task and then you try to trace that activity back to one or more brain areas."
..."We are examining the activity in the cerebral cortex as a whole. The brain is a non-stop, always-active system. When we perceive something, the information does not end up in a specific part of our brain. Rather, it is added to the brain's existing activity. If we measure the electrochemical activity of the whole cortex, we find wave-like patterns. This shows that brain activity is not local but rather that activity constantly moves from one part of the brain to another. The local activity in the Brodmann areas only appears when you average over many such waves.”
Each activity wave in the cerebral cortex is unique. "When someone repeats the same action, such as drumming their fingers, the motor centre in the brain is stimulated. But with each individual action, you still get a different wave across the cortex as a whole. Perhaps the person was more engaged in the action the first time than he was the second time, or perhaps he had something else on his mind or had a different intention for the action. The direction of the waves is also meaningful. It is already clear, for example, that activity waves related to orienting move differently in children – more prominently from back to front – than in adults. With further research, we hope to unravel what these different wave trajectories mean."
Video: A wave of brain activity measured by the magnetic field it generates externally to the head. The left view of the head is shown on the left side of the image and the right view of the head on the right side of the image. This wave takes about 100 milliseconds to traverse the entire surface of the brain. The travelling wave originates on the lower-left of the head and travels to the lower front-right of the head. Most of the magnetic field shown in this video is generated by brain activity close to the surface of the cortex. The times displayed at the bottom are relative to the subject pressing a button at time zero. The colour scale shows the peak of the wave as hot colours and the trough of the wave as dark colours.
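To make the traveling-wave idea a bit more concrete, here is a toy Python sketch (my own illustration, not the authors' analysis code) of a line of sensors over cortex in which each trial's evoked response is a 10 Hz oscillation whose phase lags linearly with distance from a jittered starting point. A single trial shows a clear wave, while the trial average is strongly attenuated, which is one way to see why trial-averaged topographies can hide the wave-like structure described above. All parameters (frequency, propagation speed, origin jitter) are assumed values.

```python
import numpy as np

fs = 1000                                 # samples per second
t = np.arange(0, 0.5, 1 / fs)             # 500 ms epoch
positions = np.linspace(0.0, 0.2, 50)     # sensor positions along 20 cm of cortex
freq, speed = 10.0, 2.0                   # 10 Hz oscillation traveling at 2 m/s (assumed)

def one_trial(origin):
    """Evoked response on one trial: phase at each sensor lags by distance/speed."""
    delays = (positions - origin) / speed
    return np.cos(2 * np.pi * freq * (t[None, :] - delays[:, None]))

rng = np.random.default_rng(1)
trials = np.array([one_trial(rng.uniform(-0.1, 0.1)) for _ in range(200)])

single = trials[0]                        # a clear traveling wave across sensors
average = trials.mean(axis=0)             # averaging over jittered waves washes it out

print("single-trial peak amplitude :", round(float(np.abs(single).max()), 3))
print("trial-average peak amplitude:", round(float(np.abs(average).max()), 3))
```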
Monday, May 27, 2013
Training our ability to make decisions on uncertain outcomes.
When making decisions, we often retrieve a limited set of items from memory. These retrieved items provide evidence for competing options. For example, a dark cloud may elicit memories of heavy rains, leading one to pack an umbrella instead of sunglasses. Likewise, when viewing an X-ray, a radiologist may retrieve memories of similar X-rays from other patients. Whether or not these other patients have a tumor may provide evidence for or against the presence of a tumor in the current patient. Giguère and Love do an interesting study showing how people's ability to make accurate predictions of probabilistic outcomes can be improved if they are trained on an idealized version of the distribution. They say it in their abstract as clearly as I can:
Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
Here are some clips from their text:
For probabilistic problems, such as determining whether a tumor is cancerous, whether it will rain, or whether a passenger is a security threat, selectively sampling memory at the time of decision makes it impossible for the learner to overcome uncertainty in the training domain. From a signal-detection perspective, selective sampling from memory results in noisy and inconsistent placement of the criterion across decision trials. Even with a perfect memory for all past experiences, a learner who selectively samples from memory will perform suboptimally on ambiguous category structures.
Figure (A) Categories A (red curve) and B (green curve) are probabilistic, overlapping distributions. After experiencing many training items (denoted by the red A and green B letters), an optimal classifier places the decision criterion (dotted line) to maximize accuracy, and will classify all new test items to the left of the criterion as A and all items to the right of the criterion as B. (B) Thus, the optimal classifier will always judge item S8 to be an A. In contrast, a model that stochastically and nonexhaustively samples similar items from memory may retrieve the three circled items and classify S8 as a B, which is not the most likely category. This sampling model will never achieve optimal performance when trained on ambiguous category structures. (C) Idealizing the category structures during training such that all items to the left of the criterion are labeled as A and to the right as B (underlined items are idealized) leads to optimal performance for both the optimal classifier and the selective sampling model.
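The contrast between the optimal classifier, the memory-sampling model, and idealized training can be reproduced in a few lines. The Python sketch below is my own toy version under assumed Gaussian category structures (it is not the authors' code or parameters): a model that classifies each test item by stochastically retrieving three similar stored exemplars and voting falls short of the optimal criterion when trained on the actual overlapping categories, and closes much of that gap when the training items are idealized, i.e., relabeled according to the optimal criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overlapping Gaussian categories A and B (all parameters are illustrative).
MU_A, MU_B, SIGMA, N_TRAIN, N_TEST, K = -1.0, 1.0, 1.5, 200, 5000, 3

def training_items(idealized=False):
    """Stored exemplars; idealized = relabel everything by the optimal criterion (0)."""
    values = np.concatenate([rng.normal(MU_A, SIGMA, N_TRAIN),
                             rng.normal(MU_B, SIGMA, N_TRAIN)])
    labels = np.where(values < 0, "A", "B") if idealized else np.repeat(["A", "B"], N_TRAIN)
    return values, labels

def sampling_model(x, values, labels, k=K):
    """Stochastically retrieve k exemplars, weighted by similarity to x, then vote."""
    w = np.exp(-(values - x) ** 2 / (2 * SIGMA ** 2))
    idx = rng.choice(len(values), size=k, replace=False, p=w / w.sum())
    return "A" if (labels[idx] == "A").sum() > k / 2 else "B"

# Test items drawn from the actual (overlapping) target distribution.
test_x = np.concatenate([rng.normal(MU_A, SIGMA, N_TEST), rng.normal(MU_B, SIGMA, N_TEST)])
truth = np.repeat(["A", "B"], N_TEST)
optimal = np.where(test_x < 0, "A", "B")            # fixed-criterion optimal classifier
print("optimal classifier accuracy:", (optimal == truth).mean())

for name, idealized in [("actual training", False), ("idealized training", True)]:
    values, labels = training_items(idealized)
    preds = np.array([sampling_model(x, values, labels) for x in test_x])
    print(f"sampling model, {name}: accuracy = {(preds == truth).mean():.3f}")
```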
Friday, May 24, 2013
Renewing our brain's ability to make decisions.
Our dopamine neurons, which enable our brains to make better choices based on outcomes, gradually die off as part of the normal aging process. Chowdhury and colleagues have now found that increasing dopamine levels in the brains of healthy older participants increased the rate at which they learned from rewarding outcomes and changed activity in the striatum, a brain region that supports learning from rewards. To relate brain activity and behavior, they utilized fMRI, diffusion tensor imaging, reinforcement learning tasks, and computational models of behavior. Their data suggest that some variant of the dopamine therapy used for Parkinson's disease patients might help older people make better decisions. Here is their more technical abstract:
Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. Using functional magnetic resonance imaging in humans, we found that healthy older adults had an abnormal signature of expected value, resulting in an incomplete reward prediction error (RPE) signal in the nucleus accumbens, a brain region that receives rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum, measured by diffusion tensor imaging, was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to the level of young adults. This drug effect was linked to restoration of a canonical neural RPE. Our results identify a neurochemical signature underlying abnormal reward processing in older adults and indicate that this can be modulated by L-DOPA.
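The "learning rate" referred to above comes from a standard reward-prediction-error (delta-rule) model, which can be written in a few lines. The Python sketch below is a generic illustration of that idea, not the authors' computational model or task: the same reward sequence is fed to learners with a low and a higher learning rate, and the higher rate (loosely analogous to what the paper reports L-DOPA restores in some older adults) adapts faster when reward probabilities reverse. The reversal setup and both alpha values are assumptions for illustration.

```python
import random

def simulate(alpha, n_trials=400, p_first=0.8, seed=3):
    """Delta-rule learner: V <- V + alpha * (reward - V).

    The reward probability reverses from p_first to 1 - p_first halfway
    through; this reversal setup is an illustrative assumption, not the
    task used in the paper.
    """
    random.seed(seed)
    value, values = 0.5, []
    for t in range(n_trials):
        p = p_first if t < n_trials // 2 else 1.0 - p_first
        reward = 1.0 if random.random() < p else 0.0
        rpe = reward - value                  # reward prediction error
        value += alpha * rpe                  # update scaled by the learning rate
        values.append(value)
    half = n_trials // 2
    # Trials needed after the reversal for the value estimate to drop below 0.5.
    return next((i for i, v in enumerate(values[half:]) if v < 0.5), half)

for alpha in (0.05, 0.3):                     # low vs. higher learning rate (assumed values)
    print(f"alpha = {alpha}: trials to adapt after the reversal = {simulate(alpha)}")
```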
Wednesday, May 22, 2013
The limits of empathy
I thought I would follow up Monday's post on well-being, kindness, happiness and all that good stuff by noting a piece on how feel-good energy can lead us astray. Yale psychologist Paul Bloom has written an excellent article in the May 20 issue of The New Yorker titled "The baby in the well - the limits of empathy." Well-meant feelings and actions of empathy can in some cases be counterproductive and blind us to more remote but statistically much more important hardships. Our evolved ability to feel what others are feeling (see numerous MindBlog posts on mirror neurons, etc.) is applied to very explicit and limited human situations, usually a specific individual (a 6-year-old girl falls in a well and the nation focuses on watching the rescue) or defined and limited groups (the Sandy Hook shootings or the Boston Marathon bombing). From Bloom:
In the past three decades, there were some sixty mass shootings, causing about five hundred deaths; that is, about one-tenth of one per cent of the homicides in America. But mass murders get splashed onto television screens, newspaper headlines, and the Web; the biggest ones settle into our collective memory—Columbine, Virginia Tech, Aurora, Sandy Hook. The 99.9 per cent of other homicides are, unless the victim is someone you’ve heard of, mere background noise.
After noting how empathy research is thriving, and several books arguing that more empathy has to be a good thing (with Rifkin, in “The Empathic Civilization” (Penguin), wanting us to make the leap to “global empathic consciousness”), Bloom notes:
This enthusiasm may be misplaced, however. Empathy has some unfortunate features—it is parochial, narrow-minded, and innumerate. We’re often at our best when we’re smart enough not to rely on it...the key to engaging empathy is what has been called “the identifiable victim effect.” As the economist Thomas Schelling, writing forty-five years ago, mordantly observed, “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”
You can see the effect in the lab. The psychologists Tehila Kogut and Ilana Ritov asked some subjects how much money they would give to help develop a drug that would save the life of one child, and asked others how much they would give to save eight children. The answers were about the same. But when Kogut and Ritov told a third group a child’s name and age, and showed her picture, the donations shot up—now there were far more to the one than to the eight.
In the broader context of humanitarianism, as critics like Linda Polman have pointed out, the empathetic reflex can lead us astray. When the perpetrators of violence profit from aid—as in the “taxes” that warlords often demand from international relief agencies—they are actually given an incentive to commit further atrocities.
A “politics of empathy” doesn’t provide much clarity in the public sphere, either. Typically, political disputes involve a disagreement over whom we should empathize with. Liberals argue for gun control, for example, by focussing on the victims of gun violence; conservatives point to the unarmed victims of crime, defenseless against the savagery of others.
On many issues, empathy can pull us in the wrong direction. The outrage that comes from adopting the perspective of a victim can drive an appetite for retribution....In one study, conducted by Jonathan Baron and Ilana Ritov, people were asked how best to punish a company for producing a vaccine that caused the death of a child. Some were told that a higher fine would make the company work harder to manufacture a safer product; others were told that a higher fine would discourage the company from making the vaccine, and since there were no acceptable alternatives on the market the punishment would lead to more deaths. Most people didn’t care; they wanted the company fined heavily, whatever the consequence.
There’s a larger pattern here. Sensible policies often have benefits that are merely statistical but victims who have names and stories. Consider global warming—what Rifkin calls the “escalating entropy bill that now threatens catastrophic climate change and our very existence.” As it happens, the limits of empathy are especially stark here. Opponents of restrictions on CO2 emissions are flush with identifiable victims—all those who will be harmed by increased costs, by business closures. The millions of people who at some unspecified future date will suffer the consequences of our current inaction are, by contrast, pale statistical abstractions.
Moral judgment entails more than putting oneself in another’s shoes. “The decline of violence may owe something to an expansion of empathy,” the psychologist Steven Pinker has written, “but it also owes much to harder-boiled faculties like prudence, reason, fairness, self-control, norms and taboos, and conceptions of human rights.” A reasoned, even counter-empathetic analysis of moral obligation and likely consequences is a better guide to planning for the future than the gut wrench of empathy.
Newtown, in the wake of the Sandy Hook massacre, was inundated with so much charity that it became a burden. More than eight hundred volunteers were recruited to deal with the gifts that were sent to the city—all of which kept arriving despite earnest pleas from Newtown officials that charity be directed elsewhere....Meanwhile—just to begin a very long list—almost twenty million American children go to bed hungry each night, and the federal food-stamp program is facing budget cuts of almost twenty per cent.
Such are the paradoxes of empathy. The power of this faculty has something to do with its ability to bring our moral concern into a laser pointer of focussed attention. If a planet of billions is to survive, however, we’ll need to take into consideration the welfare of people not yet harmed—and, even more, of people not yet born. They have no names, faces, or stories to grip our conscience or stir our fellow-feeling. Their prospects call, rather, for deliberation and calculation. Our hearts will always go out to the baby in the well; it’s a measure of our humanity. But empathy will have to yield to reason if humanity is to have a future.
Blog Categories:
emotion,
evolutionary psychology,
mirror neurons,
social cognition
Tuesday, May 21, 2013
Transferring from Google Reader to Feedly
I've just finished editing and culling the "Other Mind Blogs" list in the right column of this blog. If you are now getting the feeds of any of these, or of Deric's MindBlog, from Google Reader, which shuts down on July 1, they can all be automatically transferred to the Feedly reader at Feedly.com. The search box at the upper right corner of the Feedly page lets you enter URLs of further blogs or news sources you wish to follow. (For a more thorough listing of options, see my March 26 post.)
Monday, May 20, 2013
On well-being - An orgy of good energy last week in Madison, Wisconsin.
In spite of the slightly flippant title of this post, I really do believe this is good stuff.
The Dalai Lama paid a two-day visit to Madison, Wisconsin last week, as part of his current world tour “Change Your Mind, Change The World,” speaking at a number of different venues (all under high security screening) under the sponsorship of the Center for Investigating Healthy Minds and the Global Health Institute, both at the University of Wisconsin-Madison. My colleague Richard Davidson, who was central in arranging the visit, is doing an amazing job of bringing to the general public neuroscientific and psychological insight into well-being and happiness. (Side note: Davidson contributed to a brain imaging seminar I organized for the graduate Neuroscience program in the 1980s.) An example of his public outreach is this recent piece in The Huffington Post.
The point that I find most compelling, and it certainly resonates with my own experience, is the hard evidence that kindness and generosity are innate human predispositions whose exercise is more effective in promoting a sense of well being than explicitly self-serving behaviors. (Of course, this message has been a component of the major religious traditions for thousands of years.) There is accumulating evidence that kind and generous behavior reduces inflammatory chemistry in our bodies.
I have used the tags ‘happiness’ and 'mindfulness' (in the left column) to mark numerous posts on well-being over the past seven years. Right now my queue of potential posts in this area has more items than I will ever get to individually. So... I thought I would just list a few of them for MindBlog readers who might wish to check them out:
On happiness, from the New York Times Opinionator column.
A 75-year Harvard Study's finding on what it takes to live a happy life.
A brief New York Times piece on mindfulness.
How your mind wandering robs you of happiness. (also, enter ‘mind wandering’ in the blog’s search box)
Is giving the secret to getting ahead?
On money and well being.
Saturday, May 18, 2013
On continuing MindBlog - Drawing personal structure from sampling the digital stream.
The responses in comments and emails to my ‘scratching my head about mindblog’ post are telling me that my small contributions are valued, with some readers making it part of the ritual that structures their lives. So, I guess I should listen to that rather than fretting about adding to the digital stream that threatens to overwhelm us all.
We all want to understand how our show is run, what is going on with the little grey cells between our ears (and of course, we would like to run it better). We want to ‘see’ in addition to just ‘being.’ Indeed, this distinction is one of the most central ones I have been making through the course of over three thousand posts. It can be recast in numerous guises, such as being a moral agent in addition to a moral patient, or the difference between third- and first-person self-construals.
I feel like the recent disjunctive break in generating Deric’s MindBlog - occasioned by a two-week return to my former world of vision research - has been a useful one for me. (I will mention, by the way, that I was gratified at the recent vision meeting I attended when several doctoral and postdoctoral students told me that they look back on their time in my laboratory as one of the best times of their lives - a time when they were given structure and support, and also given freedom to grow the beginnings of their future independent professional selves.)
I’ve kept a journal since 1974, when I was into gestalt therapy, transactional analysis, and trips to Esalen to learn massage, attend workshops, and commune with the Monarch butterflies and whales of Big Sur. In that journal I started to mark entries on psychology and mind with a tag (*mind) that I could search for. My reading on mind and brain grew out of the cellular neurobiology course I started with Julius Adler and then Tony Stretton in 1970, and it formed a parallel track alongside my vision research laboratory work that finally resulted in a new course, The Biology of Mind, in 1994, and the book “Biology of Mind” of 1999 that grew out of its lecture notes. A number of lectures and web essays in the early 2000s led to the startup of this MindBlog in February of 2006. Thinking about this stuff is how I have structured my life for over 40 years, and I realize that giving that up would be the same as giving up my self.
So..... I guess MindBlog, in some form, isn’t going away.
Wednesday, May 15, 2013
Deric’s MindBlog spends time in the past...in the future?
The past: I’ve been spending the past two weeks in a former life. I was in Seattle last week to attend the annual meeting of ARVO (the Association for Research in Vision and Ophthalmology), at which my last postdoc, Vadim Arshavsky, was awarded the Proctor Prize. The graphic in this post is from a lecture I just gave on Tuesday to the final seminar this term of the McPherson Eye Research Institute here at U.W., describing the contributions of my laboratory (from 1968 to 1998) to understanding how light is changed into a nerve signal in our eyes. (The talk is posted here.)
The future: I’m scratching my head about how (maybe whether?) to continue MindBlog. It has had a good run since Feb. of 2006, and I'm kind of wondering if I should withdraw - as I did from the vision field - while I’m ahead, or at least cut back to less frequent, more thoughtful posts…. I’m a bit dissatisfied that many of the posts are essentially expanded tweets, passing on the link and abstract of an article I find interesting. I think this is lazy, but I do get ‘thank you’ emails for pointing out something that reader X is interested in. A downside is that the time I spend scanning journals and chaining myself to the daily post regime makes it difficult for me to settle into deeper development of a few topics. It also competes with the increasing amount of time I am spending on classical piano performance. I will be curious to see whether these rambling comments elicit any responses from the current 2,500 subscribers to MindBlog’s RSS feed or ~1,100 Twitter followers.
Monday, May 06, 2013
MindBlog in Seattle this week - hiatus in posts
There will be a hiatus in MindBlog posts for awhile.
I'm spending this week at an ARVO (Association for Research in Vision and Ophthalmology) meeting, where a protégé of mine, Vadim Arshavsky, whom I brought to my lab from the former USSR for post-doctoral training, is being given the field's Proctor Award for work done (mainly after leaving my laboratory) on understanding how the nerve signal initiated by a flash of light in our eyes is rapidly turned off.
Friday, May 03, 2013
Riding other people's coattails.
Another interesting bit from Psychological Science:
Two laboratory experiments and one dyadic study of ongoing relationships of romantic partners examined how temporary and chronic deficits in self-control affect individuals’ evaluations of other people. We suggest that when individuals lack self-control resources, they value such resources in other people. Our results support this hypothesis: We found that individuals low (but not high) in self-control use information about other people’s self-control abilities when judging them, evaluating other people with high self-control more positively than those with low self-control. In one study, participants whose self-control was depleted preferred people with higher self-control, whereas nondepleted participants did not show this preference. In a second study, we conceptually replicated this effect while using a behavioral measure of trait self-control. Finally, in a third study we found individuals with low (but not high) self-control reported greater dependence on dating partners with high self-control than on those with low self-control. We theorize that individuals with low self-control may use interpersonal relationships to compensate for their lack of personal self-control resources.
Thursday, May 02, 2013
Willpower and Abundance - The case for less.
I wanted to pass on some clips from Tim Wu's sane commentary in The New Republic on the recent Diamandis and Kotler book "Abundance: The Future Is Better Than You Think":
“The future is better than you think” is the message of Peter Diamandis’s and Steven Kotler’s book. Despite a flat economy and intractable environmental problems, Diamandis and his journalist co-author are deeply optimistic about humanity’s prospects. “Technology,” they say, “has the potential to significantly raise the basic standards of living for every man, woman, and child on the planet.... Abundance for all is actually within our grasp.”
Optimism is a useful motivational tool, and I see no reason to argue with Diamandis about the benefits of maintaining a sunny disposition...The unhappy irony is that Diamandis prescribes a program of “more” exactly at a point when a century of similar projects have begun to turn on us. To be fair, his ideas are most pertinent to the poorer parts of the world, where many suffer terribly from a lack of the basics. But in the rich and semi-rich parts of the world, it is a different story. There we are starting to see just what happens when we reach surplus levels across many categories of human desire, and it isn’t pretty. The unfortunate fact is that extreme abundance—like extreme scarcity, but in different ways—can make humans miserable. Where the abundance project has been truly successful, it has created a new host of problems that are now hitting humanity…
The worldwide obesity epidemic is our most obvious example of this “flip” from problems of scarcity to problems of surplus…There is no single cause for obesity, but the sine qua non for it is plenty of cheap, high-calorie foods. And such foods, of course, are the byproduct of our marvelous technologies of abundance, many of them celebrated in Diamandis’s book. They are the byproducts of the “Green Revolution,” brilliant techniques in industrial farming and the genetic modification of crops. We have achieved abundance in food, and it is killing us.
Consider another problem with no precise historical equivalent: “information overload.”…phrases such as “Internet addiction” describe people who are literally unable to stop consuming information even though it is destroying their lives…many of us suffer from milder versions of information overload. Nicholas Carr, in The Shallows, made a persuasive case that the excessive availability of information has begun to re-program our brains, creating serious issues for memory and attention span. Where people were once bored, we now face too many entertainment choices, creating a strange misery aptly termed “the paradox of choice” by the psychologist Barry Schwartz. We have achieved the information abundance that our ancestors craved, and it is driving us insane.
This very idea that too much of what we want can be a bad thing is hard to accept…But in today’s richer world, if you are overweight, in debt, and overwhelmed, there is no one to blame but yourself. Go on a diet, stop watching cable, and pay off your credit card—that’s the answer. In short, we think of scarcity problems as real, and surplus problems as matters of self-control…That may account for the current popularity of books designed to help readers control themselves. The most interesting among them is Willpower: Rediscovering the Greatest Human Strength, by Roy Baumeister and John Tierney.
The book’s most profound sections describe a phenomenon that they call “ego depletion,” a state of mental exhaustion where bad decisions are made. It turns out that being forced to make constant decisions is what causes ego depletion. So if willpower is a muscle, making too many decisions in one day is the equivalent of blowing out your hamstrings with too many squats…they recommend avoiding situations that cause ego-depletion altogether. And here is where we find the link between Abundance and Willpower…Over the last century, mainly through the abundance project, we have created a world where avoiding constant decisions is nearly impossible. We have created environments that are designed to destroy our powers of self-control by creating constant choices among abundant options. [We have] a negative feedback loop: we have more than ever, and therefore need more self-control than ever, but the abundance we’ve created destroys our ability to resist. It is a setup that Sisyphus might have actually envied.
…it is increasingly the duty of the technology industry and the technologists to take seriously the challenge of human overload, and to give it as much attention as the abundance project. It is the first great challenge for post-scarcity thinkers…So advanced are our technological powers that we will be increasingly trying to create access to abundance and to limit it at the same time. Sometimes we must create both the thesis and the antithesis to go in the right direction. We have spent the last century creating an abundance that exceeds any human scale, and now technologists must turn their powers to controlling our, or their, creation.
Blog Categories:
culture/politics,
futures,
happiness,
technology
Wednesday, May 01, 2013
Overearning
Here is an interesting study from Hsee et al. on our tendency to keep working to earn more than we need for happiness, at the expense of that happiness.
Their abstract:
High productivity and high earning rates brought about by modern technologies make it possible for people to work less and enjoy more, yet many continue to work assiduously to earn more. Do people overearn—forgo leisure to work and earn beyond their needs? This question is understudied, partly because in real life, determining the right amount of earning and defining overearning are difficult. In this research, we introduced a minimalistic paradigm that allows researchers to study overearning in a controlled laboratory setting. Using this paradigm, we found that individuals do overearn, even at the cost of happiness, and that overearning is a result of mindless accumulation—a tendency to work and earn until feeling tired rather than until having enough. Supporting the mindless-accumulation notion, our results show, first, that individuals work about the same amount regardless of earning rates and hence are more likely to overearn when earning rates are high than when they are low, and second, that prompting individuals to consider the consequences of their earnings or denying them excessive earnings can disrupt mindless accumulation and enhance happiness.
And, their description of the paradigm used:
Participants are tested individually while seated at a table in front of a computer and wearing a headset. The procedure consists of two consecutive phases, each lasting 5 min. In Phase I, the participant can relax and listen to music (mimicking leisure) or press a key to disrupt the music and listen to a noise (mimicking work). For every certain number of times the participant listens to the noise (e.g., 20 times), he or she earns 1 chocolate; the computer keeps track and shows how many chocolates the participant has earned. The participant can only earn (not eat) the chocolates in Phase I and can only eat (and not earn more of) the chocolates in Phase II. The participant does not need to eat all of the earned chocolates in Phase II, but if any remain, they must be left on the table at the end of the study. Participants learn about these provisions in advance and are informed that they can decide how many chocolates to earn in Phase I and how many to eat in Phase II, and that their only objective is to make themselves as happy as possible during the experiment.
Our paradigm simulates a microcosmic life with a fixed life span; in the first half, one chooses between leisure and labor (earning), and in the second half, one consumes one’s earnings and may not bequeath them to others. In designing the paradigm, our priority was minimalism and controllability rather than realism and external validity. The paradigm was inspired by social scientists’ approaches to investigating complex real-world issues, such as unselfish motives, using minimalistic simulations, such as the ultimatum game. These simulations involve contrived features—for example, players cannot learn each other’s identities and need not worry about reputations—but such features are important because they allow researchers to control for normative reasons for unselfish behaviors and test for pure, unselfish motives. Likewise, our paradigm also involves contrived features—for example, rewards are chocolates rather than money, and participants cannot take their rewards from the lab—but these features are crucial for us to control for normative reasons for overearning effects and test for pure overearning tendencies.
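The "mindless accumulation" account has a simple arithmetic consequence that is easy to see in a toy calculation. The Python sketch below is my own illustration with made-up numbers (the press counts, earning rates, and number of chocolates one can eat are all assumptions, not data from the study): if people press about the same number of times regardless of the earning rate, they barely earn enough at a low rate but pile up a surplus they cannot consume at a high rate.

```python
# Toy illustration of "mindless accumulation": work the same fixed amount
# regardless of the earning rate. All numbers are assumptions for illustration.
PRESSES_MINDLESS = 300      # assumed fixed amount of "work until tired"
MAX_EATABLE = 8             # assumed number of chocolates one can eat in Phase II

for label, presses_per_chocolate in [("low earning rate", 120), ("high earning rate", 20)]:
    earned = PRESSES_MINDLESS // presses_per_chocolate
    surplus = max(0, earned - MAX_EATABLE)
    needed = MAX_EATABLE * presses_per_chocolate   # presses an "enough" earner would stop at
    print(f"{label}: mindless earner gets {earned} chocolates "
          f"({surplus} left uneaten); earning only what can be eaten "
          f"would take {needed} presses.")
```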
Tuesday, April 30, 2013
The slippery slope of fear
LeDoux makes some useful comments on the confusion one encounters in studies of fear, especially those involving the amygdala. A clip:
‘Fear’ is used scientifically in two ways, which causes confusion: it refers to conscious feelings and to behavioral and physiological responses...As long as the term ‘fear’ is used interchangeably to describe both feelings and brain/bodily responses elicited by threats, confusion will continue. Restricting the scientific use of the term ‘fear’ to its common meaning and using the less-loaded term, ‘threat-elicited defense responses’, for the brain/body responses yields a language that more accurately reflects the way the brain evolved and works, and allows the exploration of processes in animal brains that are relevant to human behavior and psychiatric disorders without assuming that the complex constellation of states that humans refer to by the term fear are also consciously experienced by other animals. This is not a denial of animal consciousness, but a call for researchers not to invoke animal consciousness to explain things that do not involve consciousness in humans.