Sunday, June 09, 2013

Visions of our high-tech future: Julian Assange, Jaron Lanier, et al. on Google, Siren servers and the banality of ‘Don’t Be Evil’

The book by Google's Eric Schmidt and Jared Cohen, "The New Digital Age," is a rosy scenario of our high-tech future that many have found a bit creepy and chilling. Since our techie surround will anticipate and take care of our every movement, it seems like we can just sign off and go along for the ride (turning into mental vegetables in the process? …and letting the rule of 'use it or lose it' do its work on our poor brains?). Also, is it more than a coincidence that at roughly the same time as Schmidt and Cohen's messianic book is appearing, the movie “The Internship,” a two-hour commercial for GoogleWorld masquerading as an aspirational buddy comedy, also arrives in theaters? (You might note this caustic review of the movie.)

Trying to set aside my bias, generated by extensive negative press comments on the behavior of WikiLeaks' Julian Assange, I found his piece in the New York Times on the Schmidt and Cohen book to have some savory and choice screeds. A partial sampling:
“The New Digital Age” is, beyond anything else, an attempt by Google to position itself as America’s geopolitical visionary — the one company that can answer the question “Where should America go?” It is not surprising that a respectable cast of the world’s most famous warmongers has been trotted out to give its stamp of approval to this enticement to Western soft power. The acknowledgments give pride of place to Henry Kissinger, who along with Tony Blair and the former C.I.A. director Michael Hayden provided advance praise for the book.
…“Progress” is driven by the inexorable spread of American consumer technology over the surface of the earth. Already, every day, another million or so Google-run mobile devices are activated. Google will interpose itself, and hence the United States government, between the communications of every human being not in China (naughty China). Commodities just become more marvelous; young, urban professionals sleep, work and shop with greater ease and comfort; democracy is insidiously subverted by technologies of surveillance, and control is enthusiastically rebranded as “participation”; and our present world order of systematized domination, intimidation and oppression continues, unmentioned, unafflicted or only faintly perturbed.
This book is a balefully seminal work in which neither author has the language to see, much less to express, the titanic centralizing evil they are constructing. “What Lockheed Martin was to the 20th century,” they tell us, “technology and cybersecurity companies will be to the 21st.” Without even understanding how, they have updated and seamlessly implemented George Orwell’s prophecy. If you want a vision of the future, imagine Washington-backed Google Glasses strapped onto vacant human faces — forever. Zealots of the cult of consumer technology will find little to inspire them here, not that they ever seem to need it. But this is essential reading for anyone caught up in the struggle for the future, in view of one simple imperative: Know your enemy.
If you want to read what I think is one of the best articles I have seen so far on the unfortunate consequences of our digital universe and possible cures, check out Jaron Lanier's article "Fixing the Digital Economy." It describes the concentration of power and income in the small sliver of the population that designs and runs the massive servers (Siren servers) that analyze different sectors of our lives to minimize their risk and maximize their profits.
Even friendly, consumer-facing Siren Servers ultimately depend on spreading costs to the larger society. Siren Servers can function profitably only if people aren’t paid for the data that is used to calculate their statistical schemes.
Siren Servers drive apart our identities as consumers and workers. In some cases, causality is apparent: free music downloads are great but throw musicians out of work. Free college courses are all the rage, but tenured professorships are disappearing. Free news proliferates, but money for investigative and foreign reporting is drying up. One can easily see this trend extending to the industries of the future, like 3-D printing and renewable energy.
Lanier suggests that we need to nurture a middle class that can thrive even in a highly automated society. One approach:
Institute a universal micropayment system. Keep track of where information came from. Pay people when information that exists because they exist turns out to be valuable, no matter what kind of information is involved or whether a person intended to provide it or not. Let the price be determined by markets.
Person-to-person information markets might lead to a simpler and clearer online world. Because our information systems are designed to initially forget who provided information, services like Google and Bing must constantly scrape the global network to reconstitute the context of data. Siren Servers know who links to your data, but you don’t.
EVEN today’s titanic Siren Servers would benefit from a more monetized information economy, because it would be a healthier-growing economy. The information economy cannot exhibit the long-term growth it ought to if information coming from ordinary people is forever declared to be off the books.
Skeptics sometimes reveal hidden and unfounded wells of elitism. These surface in comments like: “Most people wouldn’t contribute very much.” But there are already empirical hints to counter such pessimism.
In networks with a central point of control, like YouTube or the Apple Store, we do see a Horatio Alger pattern in the distribution of outcomes, where there are very few viable winners and an unbounded number of hopefuls. But in more directly and thickly connected networks like Facebook, we see people typically exposed to a large number of other people, rather than just a few stars. Therefore, if Facebook users paid one another, they would see a less elite distribution of economic benefits.
Another potential benefit of monetized information is to balance the power of government. When information is free, there is no cost to gathering information about citizens. I would like the government, or anyone else, to pay each person each time that person is tracked by a camera. The government should be able to use cameras for security purposes, but in a limited, not unbounded way. Similarly, candidates should not be able to win elections by having the best Siren Servers, but that’s only a problem if the information is free. Citizens should not lose the power of the purse.
As a final note, this Douthat piece regarding the recently revealed NSA snooping on citizens notes that the problem isn’t that the Internet has been penetrated by the surveillance state; it’s that the Internet, in effect, is a surveillance state.

Saturday, June 08, 2013

Penis size and male attractiveness - the most read article in The Proceedings of the National Academy!

I finally had to pass this on.... When I check out the table of contents for new issues of the Proceedings of the National Academy of Sciences, the right-hand column of the page lists the most read and most cited articles. For weeks I've been noting that the most read article is "Penis size interacts with body shape and height to influence male attractiveness." I've been trying to avoid it, assuming it was another evolutionary psychology fairy tale... but I did have a look. So as not to deprive MindBlog readers of this gem, I pass on the abstract and one illustration:
Compelling evidence from many animal taxa indicates that male genitalia are often under postcopulatory sexual selection for characteristics that increase a male’s relative fertilization success. There could, however, also be direct precopulatory female mate choice based on male genital traits. Before clothing, the nonretractable human penis would have been conspicuous to potential mates. This observation has generated suggestions that human penis size partly evolved because of female choice. Here we show, based upon female assessment of digitally projected life-size, computer-generated images, that penis size interacts with body shape and height to determine male sexual attractiveness. Positive linear selection was detected for penis size, but the marginal increase in attractiveness eventually declined with greater penis size (i.e., quadratic selection). Penis size had a stronger effect on attractiveness in taller men than in shorter men. There was a similar increase in the positive effect of penis size on attractiveness with a more masculine body shape (i.e., greater shoulder-to-hip ratio). Surprisingly, larger penis size and greater height had almost equivalent positive effects on male attractiveness. Our results support the hypothesis that female mate choice could have driven the evolution of larger penises in humans. More broadly, our results show that precopulatory sexual selection can play a role in the evolution of genital traits.


Figures representing the most extreme height, shoulder-to-hip ratio, and penis size (±2 SD) (Right and Left) in comparison with the average (Center figure) trait values.

Friday, June 07, 2013

We can learn new information during sleep.

Arzi et al. have devised a nice demonstration of how we can learn new information during our sleep. They paired pleasant and unpleasant odors with different tones during sleep, and measured the subjects’ sniffs to tones alone when they were awake. Tones associated with pleasant smells produced stronger sniffs, and tones associated with disgusting smells produced weaker sniffs, despite the subjects’ lack of awareness of the learning process. The abstract:
During sleep, humans can strengthen previously acquired memories, but whether they can acquire entirely new information remains unknown. The nonverbal nature of the olfactory sniff response, in which pleasant odors drive stronger sniffs and unpleasant odors drive weaker sniffs, allowed us to test learning in humans during sleep. Using partial-reinforcement trace conditioning, we paired pleasant and unpleasant odors with different tones during sleep and then measured the sniff response to tones alone during the same nights' sleep and during ensuing wake. We found that sleeping subjects learned novel associations between tones and odors such that they then sniffed in response to tones alone. Moreover, these newly learned tone-induced sniffs differed according to the odor pleasantness that was previously associated with the tone during sleep. This acquired behavior persisted throughout the night and into ensuing wake, without later awareness of the learning process. Thus, humans learned new information during sleep.

Thursday, June 06, 2013

Childhood self-control predicts health, wealth, and public safety

An international collaboration between researchers at universities in the US, UK, Canada, and New Zealand has generated this study, which speaks for itself (the participants are members of the Dunedin Multidisciplinary Health and Development Study, which tracks the development of 1,037 individuals born in 1972–1973 in Dunedin, New Zealand):
Policy-makers are considering large-scale programs aimed at self-control to improve citizens’ health and wealth and reduce crime. Experimental and economic studies suggest such programs could reap benefits. Yet, is self-control important for the health, wealth, and public safety of the population? Following a cohort of 1,000 children from birth to the age of 32 y, we show that childhood self-control predicts physical health, substance dependence, personal finances, and criminal offending outcomes, following a gradient of self-control. Effects of children's self-control could be disentangled from their intelligence and social class as well as from mistakes they made as adolescents. In another cohort of 500 sibling-pairs, the sibling with lower self-control had poorer outcomes, despite shared family background. Interventions addressing self-control might reduce a panoply of societal costs, save taxpayers money, and promote prosperity.



Self-control gradient. Children with low self-control had poorer health (A), more wealth problems (B), more single-parent child rearing (C), and more criminal convictions (D) than those with high self-control.

Wednesday, June 05, 2013

MindBlog starts up some summer music - a Poulenc Valse

Calling it summer music is being optimistic... it is still very chilly in Madison. I'm starting to select some pieces for an early fall musical at my Twin Valley home in Middleton, WI. This Poulenc valse is fun and bouncy, and gives me an excuse to relearn the techie side of mixing good quality audio with video.

The chemistry of protecting our brains by fasting.

Actually, I'm making a big assumption in the post title... namely that the results obtained by Gräff et al. in mice would extrapolate to similar findings in the human brain. In several animal models, a reduced consumption of calories seems to protect against cognitive deficits such as memory loss, in addition to acting on many different cell types and tissues to slow down aging. They found that caloric restriction effectively delays the onset of neurodegeneration and preserves structural and functional synaptic plasticity as well as memory capacities. Fasting activates the expression and activity of the nicotinamide adenine dinucleotide (NAD)–dependent protein deacetylase SIRT1, a known promoter of neuronal life span. (A deacetylase is an enzyme that cleaves acetate groups - think acetic acid or vinegar - from their attachment to proteins.) Surprisingly, this effect of reduced consumption of calories is mimicked by a small-molecule SIRT1-activating compound. (Just in case you were curious, the compound is SRT3657 [tert-butyl 4-((2-(2-(6-(2-(tert-butoxycarbonyl(methyl)amino)ethylamino)-2-butylpyrimidine-4-carboxamido)phenyl)thiazolo[5,4-b]pyridin-6-yl)methoxy)piperidine-1-carboxylate])!! Mice treated with this substance recapitulated the beneficial effects of caloric restriction against neurodegeneration-associated pathologies. If this mechanism also applies to humans, SIRT1 may represent an appealing pharmacological target against neurodegeneration. Here is the abstract:
Caloric restriction (CR) is a dietary regimen known to promote lifespan by slowing down the occurrence of age-dependent diseases. The greatest risk factor for neurodegeneration in the brain is age, from which follows that CR might also attenuate the progressive loss of neurons that is often associated with impaired cognitive capacities. In this study, we used a transgenic mouse model that allows for a temporally and spatially controlled onset of neurodegeneration to test the potentially beneficial effects of CR. We found that in this model, CR significantly delayed the onset of neurodegeneration and synaptic loss and dysfunction, and thereby preserved cognitive capacities. Mechanistically, CR induced the expression of the known lifespan-regulating protein SIRT1, prompting us to test whether a pharmacological activation of SIRT1 might recapitulate CR. We found that oral administration of a SIRT1-activating compound essentially replicated the beneficial effects of CR. Thus, SIRT1-activating compounds might provide a pharmacological alternative to the regimen of CR against neurodegeneration and its associated ailments.

Monday, June 03, 2013

Long-term improvement of brain function and cognition with brain stimulation and cognitive training.

A group of collaborators from the University of Oxford and Innsbruck Medical University have published an observation that simple transcranial random noise stimulation (TRNS) of the bilateral (both sides of the brain) dorsolateral prefrontal cortex (DLPFC), applied during cognitive training over five days, causes improvement in learning and performance of complex arithmetic tasks (both calculation and drill learning) that still persists on testing 6 months later. This correlates with long-lasting oxygenated blood flow changes, measured by near-infrared spectroscopy, that suggest more efficient neurovascular coupling within the left DLPFC. Here is their complete abstract:
Noninvasive brain stimulation has shown considerable promise for enhancing cognitive functions by the long-term manipulation of neuroplasticity. However, the observation of such improvements has been focused at the behavioral level, and enhancements largely restricted to the performance of basic tasks. Here, we investigate whether transcranial random noise stimulation (TRNS) can improve learning and subsequent performance on complex arithmetic tasks. TRNS of the bilateral dorsolateral prefrontal cortex (DLPFC), a key area in arithmetic, was uniquely coupled with near-infrared spectroscopy (NIRS) to measure online hemodynamic responses within the prefrontal cortex. Five consecutive days of TRNS-accompanied cognitive training enhanced the speed of both calculation- and memory-recall-based arithmetic learning. These behavioral improvements were associated with defined hemodynamic responses consistent with more efficient neurovascular coupling within the left DLPFC. Testing 6 months after training revealed long-lasting behavioral and physiological modifications in the stimulated group relative to sham controls for trained and nontrained calculation material. These results demonstrate that, depending on the learning regime, TRNS can induce long-term enhancement of cognitive and brain functions. Such findings have significant implications for basic and translational neuroscience, highlighting TRNS as a viable approach to enhancing learning and high-level cognition by the long-term modulation of neuroplasticity.
For those of you who might well ask "How exactly is TRNS done?", here is a clip from their experimental procedures section (the photograph suggests a rather imposing device!):
Subjects received TRNS while performing the learning task each day. Two electrodes (5 cm × 5 cm) were positioned over areas of scalp corresponding to the DLPFC (F3 and F4, identified in accordance with the international 10-20 EEG procedure; see the figure). Electrodes were encased in saline-soaked synthetic sponges to improve contact with the scalp and avoid skin irritation. Stimulation was delivered by a DC-Stimulator-Plus device (DC-Stimulator-Plus, neuroConn). Noise in the high-frequency band (100–600 Hz) was chosen as it elicits greater neural excitation than lower frequency stimulation. For the TRNS group, current was administered for 20 min, with 15 s increasing and decreasing ramps at the beginning and end, respectively, of each session of stimulation. In the sham group current was applied for 30 s after upward ramping and then terminated.
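Out of curiosity, the noise waveform they describe is easy to mock up. Here is a minimal sketch (my own illustration, not the authors' stimulation code; the sampling rate and peak current are assumptions, while the 20-minute duration, 15 s ramps and 100–600 Hz band come from the clip above) of band-limited random noise with linear on/off ramps:

```python
import numpy as np

def trns_waveform(duration_s=20 * 60, fs=2000, band=(100, 600),
                  ramp_s=15, peak_ma=1.0, seed=0):
    """Band-limited random-noise current with linear ramps.

    fs (samples/s) and peak_ma (peak current in mA) are illustrative
    assumptions; duration, ramps and band follow the quoted procedure.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    # Band-limit by zeroing FFT bins outside the 100-600 Hz band.
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0
    x = np.fft.irfft(spec, n)
    x *= peak_ma / np.max(np.abs(x))      # scale to peak current
    # Linear 15 s ramps at the start and end of the session.
    ramp = np.linspace(0, 1, int(ramp_s * fs))
    x[:ramp.size] *= ramp
    x[-ramp.size:] *= ramp[::-1]
    return x

wave = trns_waveform(duration_s=60)       # 1-minute demo trace
```

Plotting a short stretch of `wave` gives a fair picture of what the electrodes deliver: a hissy, zero-mean current that fades in and out rather than switching on abruptly.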

Thursday, May 30, 2013

When more support is less....

Finkel and Fitzsimons, whose work I mentioned in a post several years ago, review studies showing that the children of parents who generously finance and regulate their education in detail ("helicopter parenting") make worse grades and feel less satisfied with their lives.
It seems that certain forms of help can dilute recipients’ sense of accountability for their own success. The college student might think: If Mom and Dad are always around to solve my problems, why spend three straight nights in the library during finals rather than hanging out with my friends?
They reference their previous work (see MindBlog link above) showing that this effect generalizes to many 'helping' situations.
Women who thought about how their spouse was helpful with their health and fitness goals became less motivated to work hard to pursue those goals: relative to the control group, these women planned to spend one-third less time in the coming week pursuing their health and fitness goals.
....the problem: how can we help our children (and our spouses, friends and co-workers) achieve their goals without undermining their sense of personal accountability and motivation to achieve them?...The answer, research suggests, is that our help has to be responsive to the recipient’s circumstances: it must balance their need for support with their need for competence. We should restrain our urge to help unless the recipient truly needs it, and even then, we should calibrate it to complement rather than substitute for the recipient’s efforts.
(I like to think that this would be a good description of how I ran my research laboratory, training graduate and post-doctoral students, for 30 years.) A final clip:
...providing help is most effective under a few conditions: when the recipient clearly needs it, when our help complements rather than replaces the recipient’s own efforts, and when it makes recipients feel that we’re comfortable having them depend on us.
So yes, by all means, parents, help your children. But don’t let your action replace their action. Support, don’t substitute. Your children will be more likely to achieve their goals — and, who knows, you might even find some time to get your own social life back on track.

Giant, Glowing Plastic Brain on Wheels

Even though I'm usually a curmudgeon about brain hype, Obama's Brain initiative, etc., I have to admire the enthusiasm and persistence of CUNY college senior Tyler Alterman, who, as his senior thesis project, is trying to get a cognitive science lab on wheels on the road. He hopes that the mobile lab, dubbed The Think Tank, will help close the gender and race gap in STEM skills (science, technology, engineering and math) that are the key to good jobs, through hands-on psych and neuroscience learning at schools and museums. He has raised $32,000 from crowdfunding, and hopes to raise the final $20,000 needed to put the mobile lab on the streets, in part through a public gala on June 18 at the Macaulay Honors College of CUNY.

Wednesday, May 29, 2013

How we work: 'brain waves' versus modern phrenology

Alexander et al. have analyzed data from magnetoencephalogram (MEG), electroencephalogram (EEG) and electrocorticogram (ECoG) recordings, focusing on globally synchronous fields in within-trial evoked brain activity. They quantified several signal components and compared topographies of activation across large-scale cortex. They found that the topography of evoked responses was primarily a function of within-trial phase, and that this within-trial phase topography could be modeled as traveling waves, which explained more signal than the trial-averaged phase topography. Here is an edited clip of explanation from Alexander:
The brain can be studied on various scales... "You have the neurons, the circuits between the neurons, the Brodmann areas – brain areas that correspond to a certain function – and the entire cortex. Traditionally, scientists looked at local activity when studying the brain, for example, activity in the Brodmann areas. To do this, you take EEGs (electroencephalograms) to measure the brain’s electrical activity while a subject performs a task and then you try to trace that activity back to one or more brain areas."
..."We are examining the activity in the cerebral cortex as a whole. The brain is a non-stop, always-active system. When we perceive something, the information does not end up in a specific part of our brain. Rather, it is added to the brain's existing activity. If we measure the electrochemical activity of the whole cortex, we find wave-like patterns. This shows that brain activity is not local but rather that activity constantly moves from one part of the brain to another. The local activity in the Brodmann areas only appears when you average over many such waves.”
Each activity wave in the cerebral cortex is unique. "When someone repeats the same action, such as drumming their fingers, the motor centre in the brain is stimulated. But with each individual action, you still get a different wave across the cortex as a whole. Perhaps the person was more engaged in the action the first time than he was the second time, or perhaps he had something else on his mind or had a different intention for the action. The direction of the waves is also meaningful. It is already clear, for example, that activity waves related to orienting move differently in children – more prominently from back to front – than in adults. With further research, we hope to unravel what these different wave trajectories mean."
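The idea that a traveling wave shows up as a smooth phase gradient across the cortex can be made concrete with a toy calculation. The sketch below is my own illustration, not the authors' analysis; the sensor spacing, frequency and propagation speed are made-up values. It generates a wave traveling across a row of sensors, then recovers the speed from the slope of phase versus position:

```python
import numpy as np

# Toy model: a 10 Hz wave traveling along a 1-D row of sensors.
fs = 500.0                               # samples per second
t = np.arange(0, 1, 1 / fs)              # 1 s of data
positions = np.linspace(0, 0.2, 16)      # 16 sensors over 20 cm
freq, speed = 10.0, 2.0                  # Hz, metres per second
k = 2 * np.pi * freq / speed             # spatial wavenumber

# signal[i, j] = wave amplitude at sensor i, time sample j
signal = np.cos(2 * np.pi * freq * t[None, :] - k * positions[:, None])

# Each sensor sees the same oscillation with a position-dependent
# phase lag.  Extract the 10 Hz phase at every sensor via FFT, then
# fit a line: the slope of phase vs. position recovers -k.
spec = np.fft.rfft(signal, axis=1)
bin10 = int(round(freq * t.size / fs))
phases = np.unwrap(np.angle(spec[:, bin10]))
slope = np.polyfit(positions, phases, 1)[0]
est_speed = 2 * np.pi * freq / abs(slope)   # ≈ 2.0 m/s
```

The same logic run in reverse is, roughly, what lets one read a propagation direction and speed out of multi-sensor phase topographies; with a trial-averaged phase the position-dependent lags wash out, which is the point made in the quote above.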

Video: A wave of brain activity measured by the magnetic field it generates externally to the head. The left view of the head is shown on the left side of the image and the right view of the head on the right side of the image. This wave takes about 100 milliseconds to traverse the entire surface of the brain. The travelling wave originates on the lower-left of the head and travels to the lower front-right of the head. Most of the magnetic field shown in this video is generated by brain activity close to the surface of the cortex. The times displayed at the bottom are relative to the subject pressing a button at time zero. The colour scale shows the peak of the wave as hot colours and the trough of the wave as dark colours.

Monday, May 27, 2013

Training our ability to make decisions on uncertain outcomes.

When making decisions, we often retrieve a limited set of items from memory. These retrieved items provide evidence for competing options. For example, a dark cloud may elicit memories of heavy rains, leading one to pack an umbrella instead of sunglasses. Likewise, when viewing an X-ray, a radiologist may retrieve memories of similar X-rays from other patients. Whether or not these other patients have a tumor may provide evidence for or against the presence of a tumor in the current patient. Giguère and Love have done an interesting study showing how people's ability to make accurate predictions of probabilistic outcomes can be improved if they are trained on an idealized version of the distribution. They say it in their abstract as clearly as I can:
Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
Here are some clips from their text:
For probabilistic problems, such as determining whether a tumor is cancerous, whether it will rain, or whether a passenger is a security threat, selectively sampling memory at the time of decision makes it impossible for the learner to overcome uncertainty in the training domain. From a signal-detection perspective, selective sampling from memory results in noisy and inconsistent placement of the criterion across decision trials. Even with a perfect memory for all past experiences, a learner who selectively samples from memory will perform suboptimally on ambiguous category structures.

Figure (A) Categories A (red curve) and B (green curve) are probabilistic, overlapping distributions. After experiencing many training items (denoted by the red A and green B letters), an optimal classifier places the decision criterion (dotted line) to maximize accuracy, and will classify all new test items to left of the criterion as A and all items to the right of the criterion as B. (B) Thus, the optimal classifier will always judge item S8 to be an A. In contrast, a model that stochastically and nonexhaustively samples similar items from memory may retrieve the three circled items and classify S8 as a B, which is not the most likely category. This sampling model will never achieve optimal performance when trained on ambiguous category structures. (C) Idealizing the category structures during training such that all items to the left of the criterion are labeled as A and to the right as B (underlined items are idealized) leads to optimal performance for both the optimal classifier and the selective sampling model.
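The penalty for selective sampling is easy to reproduce in a toy simulation. The sketch below is my own illustration, not the authors' model: two overlapping Gaussian categories, with an optimal criterion classifier pitted against a rule that stochastically retrieves a few similar exemplars from memory and votes:

```python
import random

random.seed(1)

# Two overlapping categories: A ~ N(-1, 1), B ~ N(+1, 1).
# The optimal criterion is x = 0 (best possible accuracy ~84%).
def sample_item():
    label = random.choice("AB")
    x = random.gauss(-1 if label == "A" else 1, 1)
    return x, label

memory = [sample_item() for _ in range(2000)]   # training exemplars

def optimal(x):
    # Criterion placement of the ideal classifier in panel A/B.
    return "A" if x < 0 else "B"

def selective(x, k=3):
    # Stochastic, non-exhaustive retrieval: probe a random subset of
    # memory, keep the k exemplars most similar to x, and vote.
    probes = random.sample(memory, 100)
    nearest = sorted(probes, key=lambda item: abs(item[0] - x))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

tests = [sample_item() for _ in range(5000)]
acc_opt = sum(optimal(x) == lab for x, lab in tests) / len(tests)
acc_sel = sum(selective(x) == lab for x, lab in tests) / len(tests)
# acc_opt reliably beats acc_sel: near the boundary the retrieved
# labels are themselves probabilistic, so voting on a few of them
# injects decision noise that more training data cannot remove.
```

Relabeling the stored exemplars by the side of the criterion they fall on (the paper's "idealization") makes the retrieved votes unanimous near the boundary, which is why training on the distorted distribution helps the sampler but not the optimal classifier.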

Friday, May 24, 2013

Renewing our brain's ability to make decisions.

Our dopamine neurons, which enable our brains to make better choices based on outcomes, gradually die off as part of the normal aging process. Chowdhury and colleagues have now found that increasing dopamine levels in the brains of healthy older participants increased the rate at which they learned from rewarding outcomes and changed activity in the striatum, a brain region that supports learning from rewards. To relate brain activity and behavior, they utilized fMRI, diffusion tensor imaging, reinforcement learning tasks, and computational models of behavior. Their data might suggest that some variant of the dopamine therapy used for Parkinson's disease patients might help older people make decisions. Here is their more technical abstract:
Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. Using functional magnetic resonance imaging in humans, we found that healthy older adults had an abnormal signature of expected value, resulting in an incomplete reward prediction error (RPE) signal in the nucleus accumbens, a brain region that receives rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum, measured by diffusion tensor imaging, was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to the level of young adults. This drug effect was linked to restoration of a canonical neural RPE. Our results identify a neurochemical signature underlying abnormal reward processing in older adults and indicate that this can be modulated by L-DOPA.
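The reward prediction error (RPE) in the abstract has a very compact form: on each trial the error is the difference between the reward received and the current expectation, and the expectation is nudged toward the outcome by that error times a learning rate. A toy Rescorla–Wagner sketch (my own illustration, not the authors' computational model; the numbers are arbitrary) shows why a restored, higher learning rate tracks reward probabilities faster:

```python
import random

random.seed(0)

def run_learner(alpha, trials=500, p_reward=0.8):
    """Value learning for a single option with learning rate alpha.

    Each trial: reward r is 1 with probability p_reward,
    prediction error delta = r - V, and V moves by alpha * delta.
    alpha stands in for the dopamine-dependent learning rate.
    """
    V, errors = 0.0, []
    for _ in range(trials):
        r = 1.0 if random.random() < p_reward else 0.0
        delta = r - V              # reward prediction error (RPE)
        V += alpha * delta         # value update
        errors.append(abs(p_reward - V))
    return sum(errors[:50]) / 50   # mean tracking error, early trials

slow = run_learner(alpha=0.02)     # blunted learning rate
fast = run_learner(alpha=0.15)     # restored learning rate
# fast < slow: the higher learning rate converges on the true
# reward probability within far fewer trials.
```

This is of course a cartoon of the paper's point: the fMRI finding concerns where the canonical RPE signal appears (nucleus accumbens) and how L-DOPA restores it, not this particular update rule.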

Wednesday, May 22, 2013

The limits of empathy

I thought I would follow up Monday's post on well-being, kindness, happiness and all that good stuff by noting a piece on how feel-good energy can lead us astray. Yale psychologist Paul Bloom has written an excellent article in the May 20 issue of The New Yorker titled “The baby in the well - the limits of empathy.” Well-meant feelings and actions of empathy can in some cases be counterproductive and blind us to more remote but statistically much more important hardships. Our evolved ability to feel what others are feeling (see numerous MindBlog posts on mirror neurons, etc.) is applied to very explicit and limited human situations, usually a specific individual (a six-year-old girl falls in a well and the nation focuses on watching the rescue) or defined and limited groups (the mass shootings at Sandy Hook or the Boston Marathon bombing). From Bloom:
In the past three decades, there were some sixty mass shootings, causing about five hundred deaths; that is, about one-tenth of one per cent of the homicides in America. But mass murders get splashed onto television screens, newspaper headlines, and the Web; the biggest ones settle into our collective memory —Columbine, Virginia Tech, Aurora, Sandy Hook. The 99.9 per cent of other homicides are, unless the victim is someone you’ve heard of, mere background noise.
After noting how empathy research is thriving, and several books arguing that more empathy has to be a good thing (with Rifkin, in “The Empathic Civilization” (Penguin), wanting us to make the leap to “global empathic consciousness”), Bloom notes:
This enthusiasm may be misplaced, however. Empathy has some unfortunate features—it is parochial, narrow-minded, and innumerate. We’re often at our best when we’re smart enough not to rely on it...the key to engaging empathy is what has been called “the identifiable victim effect.” As the economist Thomas Schelling, writing forty-five years ago, mordantly observed, “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”
You can see the effect in the lab. The psychologists Tehila Kogut and Ilana Ritov asked some subjects how much money they would give to help develop a drug that would save the life of one child, and asked others how much they would give to save eight children. The answers were about the same. But when Kogut and Ritov told a third group a child’s name and age, and showed her picture, the donations shot up—now people gave far more to the one than to the eight.
In the broader context of humanitarianism, as critics like Linda Polman have pointed out, the empathetic reflex can lead us astray. When the perpetrators of violence profit from aid—as in the “taxes” that warlords often demand from international relief agencies—they are actually given an incentive to commit further atrocities.
A “politics of empathy” doesn’t provide much clarity in the public sphere, either. Typically, political disputes involve a disagreement over whom we should empathize with. Liberals argue for gun control, for example, by focussing on the victims of gun violence; conservatives point to the unarmed victims of crime, defenseless against the savagery of others.
On many issues, empathy can pull us in the wrong direction. The outrage that comes from adopting the perspective of a victim can drive an appetite for retribution....In one study, conducted by Jonathan Baron and Ilana Ritov, people were asked how best to punish a company for producing a vaccine that caused the death of a child. Some were told that a higher fine would make the company work harder to manufacture a safer product; others were told that a higher fine would discourage the company from making the vaccine, and since there were no acceptable alternatives on the market the punishment would lead to more deaths. Most people didn’t care; they wanted the company fined heavily, whatever the consequence.
There’s a larger pattern here. Sensible policies often have benefits that are merely statistical but victims who have names and stories. Consider global warming—what Rifkin calls the “escalating entropy bill that now threatens catastrophic climate change and our very existence.” As it happens, the limits of empathy are especially stark here. Opponents of restrictions on CO2 emissions are flush with identifiable victims—all those who will be harmed by increased costs, by business closures. The millions of people who at some unspecified future date will suffer the consequences of our current inaction are, by contrast, pale statistical abstractions.
Moral judgment entails more than putting oneself in another’s shoes. “The decline of violence may owe something to an expansion of empathy,” the psychologist Steven Pinker has written, “but it also owes much to harder-boiled faculties like prudence, reason, fairness, self-control, norms and taboos, and conceptions of human rights.” A reasoned, even counter-empathetic analysis of moral obligation and likely consequences is a better guide to planning for the future than the gut wrench of empathy.
Newtown, in the wake of the Sandy Hook massacre, was inundated with so much charity that it became a burden. More than eight hundred volunteers were recruited to deal with the gifts that were sent to the city—all of which kept arriving despite earnest pleas from Newtown officials that charity be directed elsewhere....Meanwhile—just to begin a very long list—almost twenty million American children go to bed hungry each night, and the federal food-stamp program is facing budget cuts of almost twenty per cent.
Such are the paradoxes of empathy. The power of this faculty has something to do with its ability to bring our moral concern into a laser pointer of focussed attention. If a planet of billions is to survive, however, we’ll need to take into consideration the welfare of people not yet harmed—and, even more, of people not yet born. They have no names, faces, or stories to grip our conscience or stir our fellow-feeling. Their prospects call, rather, for deliberation and calculation. Our hearts will always go out to the baby in the well; it’s a measure of our humanity. But empathy will have to yield to reason if humanity is to have a future.

Tuesday, May 21, 2013

Transferring from Google Reader to Feedly

I've just finished editing and culling the "Other Mind Blogs" list in the right column of this blog.  If you are now getting the feeds of any of these or Deric's MindBlog from Google Reader, which shuts down on July 1, they can all be automatically transferred to the Feedly reader at Feedly.com.  The search box at the upper right corner of the Feedly page lets you enter the URLs of further blogs or news sources you wish to follow. (For a more thorough listing of options, see my March 26 post.)

Monday, May 20, 2013

On well-being - An orgy of good energy last week in Madison, Wisconsin.

In spite of the slightly flippant title of this post, I really do believe this is good stuff. The Dalai Lama paid a two-day visit to Madison, Wisconsin last week, as part of his current world tour “Change Your Mind, Change The World,” speaking at a number of different venues (all under high-security screening) under the sponsorship of the Center for Investigating Healthy Minds and the Global Health Institute, both at the University of Wisconsin-Madison. My colleague Richard Davidson, who was central in arranging his visit, is doing an amazing job of bringing to the general public neuroscientific and psychological insight into well-being and happiness. (side note: Davidson contributed to a brain imaging seminar I organized for the graduate Neuroscience program in the 1980’s.) An example of his public outreach is this recent piece in The Huffington Post.

The point that I find most compelling, and it certainly resonates with my own experience, is the hard evidence that kindness and generosity are innate human predispositions whose exercise is more effective in promoting a sense of well being than explicitly self-serving behaviors. (Of course, this message has been a component of the major religious traditions for thousands of years.) There is accumulating evidence that kind and generous behavior reduces inflammatory chemistry in our bodies.

I have used the tags ‘happiness’ and 'mindfulness' (in the left column) to mark numerous posts on well-being over the past seven years. Right now my queue of potential posts in this area has more items than I will ever get to individually. So...I thought I would just list a few of them for MindBlog readers who might wish to check them out:

On happiness, from the New York Times Opinionator column.

A 75-year Harvard Study's finding on what it takes to live a happy life. 

A brief New York Times piece on mindfulness.

How your mind wandering robs you of happiness. (also, enter ‘mind wandering’ in the blog’s search box)

Is giving the secret to getting ahead?

On money and well being.


Saturday, May 18, 2013

On continuing MindBlog - Drawing personal structure from sampling the digital stream.

The responses in comments and emails to my ‘scratching my head about mindblog’ post are telling me that my small contributions are valued, with some making it part of the ritual that structures their lives. So, I guess I should listen to that rather than fretting about adding to the digital stream that threatens to overwhelm us all.
We all want to understand how our show is run, what is going on with the little grey cells between our ears (and of course, we would like to run it better). We want to ‘see’ in addition to just ‘being.’ Indeed, this distinction is one of the most central ones I have been making through the course of over three thousand posts. It can be recast in numerous guises, such as being a moral agent in addition to a moral patient, or the difference between third- and first-person self construals.
I feel like the recent disjunctive break in generating Deric’s MindBlog - occasioned by a two week return to my former world of vision research - has been a useful one for me. (I will mention, by the way, that I was gratified at the recent vision meeting I attended when several doctoral and postdoctoral students told me that they look back on their time in my laboratory as one of the best in their lives - a time when they were given structure and support, and also given freedom to grow the beginnings of their future independent professional selves.)
I’ve kept a journal since 1974, when I was into gestalt therapy, transactional analysis, and trips to Esalen to learn massage, attend workshops, and commune with the Monarch butterflies and whales of the Big Sur. In that journal I began to mark entries on psychology and mind with a tag (*mind) that I could search for. My reading on mind and brain grew out of the cellular neurobiology course I started with Julius Adler and then Tony Stretton in 1970, and it formed a parallel track alongside my vision research laboratory work that finally resulted in a new course, The Biology of Mind, in 1994, and the book “Biology of Mind” of 1999 that grew out of its lecture notes. A number of lectures and web essays in the early 2000’s led to the startup of this MindBlog in February of 2006. Thinking about this stuff is how I have structured my life for over 40 years, and I realize that giving that up would be the same as giving up my self.
So..... I guess MindBlog, in some form, isn’t going away.

Wednesday, May 15, 2013

Deric’s MindBlog spends time in the past...in the future?

The past:  I’ve been spending the past two weeks in a former life. I was in Seattle last week to attend the annual meeting of ARVO (Assoc. for Research in Vision and Ophthalmology), at which my last postdoc, Vadim Arshavsky, was awarded the Proctor Prize.   The graphic in this post is from a lecture I just gave on Tuesday to the final seminar this term of the McPherson Eye Research Institute here at U.W., describing the contributions of my laboratory (from 1968 to 1998) to understanding how light changes into a nerve signal in our eyes. (The talk is posted here.)

The future:  I’m scratching my head about how (maybe whether?) to continue MindBlog.  It has had a good run since Feb. of 2006, and I'm kind of wondering if I should withdraw - as I did from the vision field - while I’m ahead, or at least cut back to less frequent, more thoughtful posts…. I’m a bit dissatisfied that many of the posts are essentially expanded tweets, passing on the link and abstract of an article I find interesting.  I think this is lazy, but I do get ‘thank you’ emails for pointing out something that reader X is interested in.  A downside is that the time I take scanning journals and chaining myself to the daily post regime makes it difficult for me to settle into deeper development of a few topics.  It also competes with the increasing amount of time I am spending on classical piano performance. I will be curious to see whether these rambling comments elicit any responses from the current 2,500 subscribers to MindBlog’s RSS feed or ~1,100 Twitter followers.

Monday, May 06, 2013

MindBlog in Seattle this week - hiatus in posts

There will be a hiatus in MindBlog posts for awhile.
I'm spending this week at an ARVO (Association for Research in Vision and Ophthalmology) meeting where a protégé of mine, Vadim Arshavsky, whom I brought to my lab from the former USSR for post-doctoral training, is being given the field's Proctor Award for work done (mainly after leaving my laboratory) on understanding how the nerve signal initiated by a flash of light in our eyes is rapidly turned off.

Friday, May 03, 2013

Riding other people's coattails.

Another interesting bit from Psychological Science:
Two laboratory experiments and one dyadic study of ongoing relationships of romantic partners examined how temporary and chronic deficits in self-control affect individuals’ evaluations of other people. We suggest that when individuals lack self-control resources, they value such resources in other people. Our results support this hypothesis: We found that individuals low (but not high) in self-control use information about other people’s self-control abilities when judging them, evaluating other people with high self-control more positively than those with low self-control. In one study, participants whose self-control was depleted preferred people with higher self-control, whereas nondepleted participants did not show this preference. In a second study, we conceptually replicated this effect while using a behavioral measure of trait self-control. Finally, in a third study we found individuals with low (but not high) self-control reported greater dependence on dating partners with high self-control than on those with low self-control. We theorize that individuals with low self-control may use interpersonal relationships to compensate for their lack of personal self-control resources.

Thursday, May 02, 2013

Willpower and Abundance - The case for less.

I wanted to pass on some clips from Tim Wu's sane commentary in The New Republic on the recent Diamandis and Kotler book "Abundance: The Future Is Better Than You Think.":
“The future is better than you think” is the message of Peter Diamandis’s and Steven Kotler’s book. Despite a flat economy and intractable environmental problems, Diamandis and his journalist co-author are deeply optimistic about humanity’s prospects. “Technology,” they say, “has the potential to significantly raise the basic standards of living for every man, woman, and child on the planet.... Abundance for all is actually within our grasp.”
Optimism is a useful motivational tool, and I see no reason to argue with Diamandis about the benefits of maintaining a sunny disposition...The unhappy irony is that Diamandis prescribes a program of “more” exactly at a point when a century of similar projects have begun to turn on us. To be fair, his ideas are most pertinent to the poorer parts of the world, where many suffer terribly from a lack of the basics. But in the rich and semi-rich parts of the world, it is a different story. There we are starting to see just what happens when we reach surplus levels across many categories of human desire, and it isn’t pretty. The unfortunate fact is that extreme abundance—like extreme scarcity, but in different ways—can make humans miserable. Where the abundance project has been truly successful, it has created a new host of problems that are now hitting humanity…
The worldwide obesity epidemic is our most obvious example of this “flip” from problems of scarcity to problems of surplus…There is no single cause for obesity, but the sine qua non for it is plenty of cheap, high-calorie foods. And such foods, of course, are the byproduct of our marvelous technologies of abundance, many of them celebrated in Diamandis’s book. They are the byproducts of the “Green Revolution,” brilliant techniques in industrial farming and the genetic modification of crops. We have achieved abundance in food, and it is killing us.
Consider another problem with no precise historical equivalent: “information overload.”…phrases such as “Internet addiction” describe people who are literally unable to stop consuming information even though it is destroying their lives…many of us suffer from milder versions of information overload. Nicholas Carr, in The Shallows, made a persuasive case that the excessive availability of information has begun to re-program our brains, creating serious issues for memory and attention span. Where people were once bored, we now face too many entertainment choices, creating a strange misery aptly termed “the paradox of choice” by the psychologist Barry Schwartz. We have achieved the information abundance that our ancestors craved, and it is driving us insane.
This very idea that too much of what we want can be a bad thing is hard to accept…But in today’s richer world, if you are overweight, in debt, and overwhelmed, there is no one to blame but yourself. Go on a diet, stop watching cable, and pay off your credit card—that’s the answer. In short, we think of scarcity problems as real, and surplus problems as matters of self-control…That may account for the current popularity of books designed to help readers control themselves. The most interesting among them is Willpower: Rediscovering the Greatest Human Strength, by Roy Baumeister and John Tierney.
The book’s most profound sections describe a phenomenon that they call “ego depletion,” a state of mental exhaustion where bad decisions are made. It turns out that being forced to make constant decisions is what causes ego depletion. So if willpower is a muscle, making too many decisions in one day is the equivalent of blowing out your hamstrings with too many squats…they recommend avoiding situations that cause ego-depletion altogether. And here is where we find the link between Abundance and Willpower…Over the last century, mainly through the abundance project, we have created a world where avoiding constant decisions is nearly impossible. We have created environments that are designed to destroy our powers of self-control by creating constant choices among abundant options. [We have] a negative feedback loop: we have more than ever, and therefore need more self-control than ever, but the abundance we’ve created destroys our ability to resist. It is a setup that Sisyphus might have actually envied. 
…it is increasingly the duty of the technology industry and the technologists to take seriously the challenge of human overload, and to give it as much attention as the abundance project. It is the first great challenge for post-scarcity thinkers…So advanced are our technological powers that we will be increasingly trying to create access to abundance and to limit it at the same time. Sometimes we must create both the thesis and the antithesis to go in the right direction. We have spent the last century creating an abundance that exceeds any human scale, and now technologists must turn their powers to controlling our, or their, creation.  

Wednesday, May 01, 2013

Overearning

Here is an interesting study from Hsee et al on our tendency to keep working to earn more than we need for happiness, at the expense of that happiness.
Their abstract:
High productivity and high earning rates brought about by modern technologies make it possible for people to work less and enjoy more, yet many continue to work assiduously to earn more. Do people overearn—forgo leisure to work and earn beyond their needs? This question is understudied, partly because in real life, determining the right amount of earning and defining overearning are difficult. In this research, we introduced a minimalistic paradigm that allows researchers to study overearning in a controlled laboratory setting. Using this paradigm, we found that individuals do overearn, even at the cost of happiness, and that overearning is a result of mindless accumulation—a tendency to work and earn until feeling tired rather than until having enough. Supporting the mindless-accumulation notion, our results show, first, that individuals work about the same amount regardless of earning rates and hence are more likely to overearn when earning rates are high than when they are low, and second, that prompting individuals to consider the consequences of their earnings or denying them excessive earnings can disrupt mindless accumulation and enhance happiness.
And, their description of the paradigm used:
Participants are tested individually while seated at a table in front of a computer and wearing a headset. The procedure consists of two consecutive phases, each lasting 5 min. In Phase I, the participant can relax and listen to music (mimicking leisure) or press a key to disrupt the music and listen to a noise (mimicking work). For every certain number of times the participant listens to the noise (e.g., 20 times), he or she earns 1 chocolate; the computer keeps track and shows how many chocolates the participant has earned. The participant can only earn (not eat) the chocolates in Phase I and can only eat (and not earn more of) the chocolates in Phase II. The participant does not need to eat all of the earned chocolates in Phase II, but if any remain, they must be left on the table at the end of the study. Participants learn about these provisions in advance and are informed that they can decide how many chocolates to earn in Phase I and how many to eat in Phase II, and that their only objective is to make themselves as happy as possible during the experiment.
Our paradigm simulates a microcosmic life with a fixed life span; in the first half, one chooses between leisure and labor (earning), and in the second half, one consumes one’s earnings and may not bequeath them to others. In designing the paradigm, our priority was minimalism and controllability rather than realism and external validity. The paradigm was inspired by social scientists’ approaches to investigating complex real-world issues, such as unselfish motives, using minimalistic simulations, such as the ultimatum game. These simulations involve contrived features—for example, players cannot learn each other’s identities and need not worry about reputations—but such features are important because they allow researchers to control for normative reasons for unselfish behaviors and test for pure, unselfish motives. Likewise, our paradigm also involves contrived features—for example, rewards are chocolates rather than money, and participants cannot take their rewards from the lab—but these features are crucial for us to control for normative reasons for overearning effects and test for pure overearning tendencies.
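The "mindless accumulation" logic is easy to make concrete. Here is a toy simulation of the paradigm (all numbers are my own illustrative inventions, not the study's press counts or chocolate rates): a participant who presses the same amount regardless of the earning rate wastes nothing when earning is slow, but overearns badly when earning is fast.

```python
# Toy sketch of the earning paradigm: press a key ("work") to earn
# chocolates in Phase I, but only a fixed number can be eaten in Phase II.
# All numeric values here are hypothetical illustrations.

MAX_EATABLE = 5  # hypothetical Phase II consumption capacity

def chocolates_earned(presses, presses_per_chocolate):
    return presses // presses_per_chocolate

def wasted(presses, rate):
    """Chocolates earned beyond what can be consumed -- pure overearning."""
    return max(0, chocolates_earned(presses, rate) - MAX_EATABLE)

# "Mindless accumulation": the same number of presses at both earning rates.
presses = 200
print(wasted(presses, rate=40))   # low earning rate: 5 earned, 0 wasted
print(wasted(presses, rate=10))   # high earning rate: 20 earned, 15 wasted
```

This mirrors the authors' finding: because participants worked about the same amount regardless of earning rate, the surplus (and the happiness cost) appeared chiefly in the high-rate condition.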

Tuesday, April 30, 2013

The slippery slope of fear

LeDoux makes some useful comments on the confusion one encounters in studies of fear, especially those involving the amygdala; a clip:
‘Fear’ is used scientifically in two ways, which causes confusion: it refers to conscious feelings and to behavioral and physiological responses...As long as the term ‘fear’ is used interchangeably to describe both feelings and brain/bodily responses elicited by threats, confusion will continue. Restricting the scientific use of the term ‘fear’ to its common meaning and using the less-loaded term, ‘threat-elicited defense responses’, for the brain/body responses yields a language that more accurately reflects the way the brain evolved and works, and allows the exploration of processes in animal brains that are relevant to human behavior and psychiatric disorders without assuming that the complex constellation of states that humans refer to by the term fear are also consciously experienced by other animals. This is not a denial of animal consciousness, but a call for researchers not to invoke animal consciousness to explain things that do not involve consciousness in humans.

Monday, April 29, 2013

Lessons learned from a Chaos and Complexity seminar.

For ~15 years I have participated in the weekly Chaos and Complexity seminar at the Univ. of Wisconsin organized by physics professor Clint Sprott, and have given ~5 lectures to the group during that period.  I want to pass on this link to Sprott's summary comments presented at the final meeting of the spring term on 5/7/2013, and in particular his closing remarks:
We have heard many speakers over the years make dire predictions, especially regarding the climate and the ecology, but I am more optimistic than most about our future for five fundamental reasons: 1) Negative feedback is at least as common as positive feedback, and it tends to regulate many processes. 2) Most nonlinearities are beneficial, putting inherent limits on the growth of deleterious effects. 3) Complex dynamical systems self-organize to optimize their fitness. 4) Chaotic systems are sensitive to small changes, making prediction difficult, but facilitating control. 5) Our knowledge and technology will continue to advance, meaning that new solutions to problems will be developed as they are needed or, more likely, soon thereafter in response to the need. Whether it's fusion reactors, geoengineering, vastly improved batteries, halting of the aging process, DNA cloning to restore extinct species, or some other game changer, things may get worse before they get better, but humans are enormously ingenious and adaptable and will rise to the challenge of averting disaster.
This is not a prediction that our problems will vanish or an argument for ignoring them. On the contrary, our choices and actions are the means by which society will reorganize to become even better in the decades to follow, albeit surely not a Utopia.
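Sprott's fourth point — that chaotic systems are sensitive to small changes — is easy to see in a few lines with the logistic map, a standard toy system from the seminar's subject matter (this illustration is mine, not anything from his talk):

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# with the usual textbook chaotic parameter r = 4.

def diverge(x0, eps=1e-9, r=4.0, steps=50):
    """Largest gap that opens between two trajectories started eps apart."""
    x, y = x0, x0 + eps
    gap = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

print(diverge(0.2))   # a one-part-in-a-billion nudge grows to a large gap
```

The same exponential amplification that frustrates long-range prediction is what makes tiny, well-timed interventions effective — the basis of chaos-control schemes, and the sense in which sensitivity "facilitates control."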

Friday, April 26, 2013

Teleological reasoning about nature: intentional design or relational perspectives?

Ojalehto et al. offer an interesting analysis of assumptions about our reasoning about natural phenomena. Some slightly edited clips from the abstract and paper:
According to the theory of ‘promiscuous teleology’, humans are naturally biased to (mistakenly) construe natural kinds as if they (like artifacts) were intentionally designed ‘for a purpose’ (i.e. clouds are 'for' raining). However, this theory introduces two paradoxes. First, if infants readily distinguish natural kinds from artifacts, as evidence suggests, why do school-aged children erroneously conflate this distinction? Second, if Western scientific education is required to overcome promiscuous teleological reasoning, how can one account for the ecological expertise of non-Western educated, indigenous people? We develop an alternative ‘relational-deictic’ interpretation, proposing that the teleological stance may not index a deep-rooted belief that nature was designed for a purpose, but instead may reflect an appreciation of the perspectival relations among living things and their environments.
A new relational-deictic framework can take into account a rich set of relations and perspectives among natural entities, permitting one to avoid cultural assumptions about the ‘right way’ to conceptualize nature, and identifying the claim for ‘intuitive theism’ as a culturally-infused stance. Kelemen writes that teleological reasoning is a ‘side-effect’ of people's natural inclination to ‘privilege intentional explanation’ and view ‘nature as an intentionally designed artifact.’ The relational-deictic framework outlined here offers a different interpretation: teleological reasoning reflects a tendency to think through perspectival relationships within (socio-ecological) webs of interdependency. On this view, the origins of teleological thinking are social and relational rather than individual and intentional. This has implications for ongoing debates about the primacy of social and relational theories in human development.
The relational-deictic interpretation opens new avenues for research into how people come to understand the natural world and their place within it. Teleological reasoning may not be immature or misguided. Instead, it may reflect young children's ecological perspective-taking abilities and serve as an entry-point for reasoning about socio-ecological systems of living things, rather than reasoning about isolated, abstracted, and essentialized individual kinds.

Thursday, April 25, 2013

Brain activity correlating with future antisocial activity.

From Aharoni et al.:
Identification of factors that predict recurrent antisocial behavior is integral to the social sciences, criminal justice procedures, and the effective treatment of high-risk individuals. Here we show that error-related brain activity elicited during performance of an inhibitory task prospectively predicted subsequent rearrest among adult offenders within 4 y of release (N = 96). The odds that an offender with relatively low anterior cingulate activity would be rearrested were approximately double that of an offender with high activity in this region, holding constant other observed risk factors. These results suggest a potential neurocognitive biomarker for persistent antisocial behavior.
A marker, fine, but as a guide to action?  Suggesting more post-incarceration therapeutic efforts with those having lower anterior cingulate activities?

Wednesday, April 24, 2013

Body posture modulates action perception.

From Zimmermann et al:
Recent studies have highlighted cognitive and neural similarities between planning and perceiving actions. Given that action planning involves a simulation of potential action plans that depends on the actor's body posture, we reasoned that perceiving actions may also be influenced by one's body posture. Here, we test whether and how this influence occurs by measuring behavioral and cerebral (fMRI) responses in human participants predicting goals of observed actions, while manipulating postural congruency between their own body posture and postures of the observed agents. Behaviorally, predicting action goals is facilitated when the body posture of the observer matches the posture achieved by the observed agent at the end of his action (action's goal posture). Cerebrally, this perceptual postural congruency effect modulates activity in a portion of the left intraparietal sulcus that has previously been shown to be involved in updating neural representations of one's own limb posture during action planning. This intraparietal area showed stronger responses when the goal posture of the observed action did not match the current body posture of the observer. These results add two novel elements to the notion that perceiving actions relies on the same predictive mechanism as planning actions. First, the predictions implemented by this mechanism are based on the current physical configuration of the body. Second, during both action planning and action observation, these predictions pertain to the goal state of the action.

Tuesday, April 23, 2013

Where our brains compute musical reward.

Yet another fascinating chunk from Zatorre and collaborators:
We used functional magnetic resonance imaging to investigate neural processes when music gains reward value the first time it is heard. The degree of activity in the mesolimbic striatal regions, especially the nucleus accumbens, during music listening was the best predictor of the amount listeners were willing to spend on previously unheard music in an auction paradigm. Importantly, the auditory cortices, amygdala, and ventromedial prefrontal regions showed increased activity during listening conditions requiring valuation, but did not predict reward value, which was instead predicted by increasing functional connectivity of these regions with the nucleus accumbens as the reward value increased. Thus, aesthetic rewards arise from the interaction between mesolimbic reward circuitry and cortical networks involved in perceptual analysis and valuation.

Monday, April 22, 2013

Quiet - the world of introverts.

I recently visited my year-old grandson in Austin, Texas, who turns out to be my opposite on Jerome Kagan's scale of temperamental introversion/extraversion. Like his mother, he is outgoing and gregarious, and wears me out very quickly with his intensity in play activities. Against this background I was struck by a book review by Judith Warner of Susan Cain's new book "Quiet", listed by the NY Times as one of the 10 major popular science books of the past year. Some clips from the review:
Too often denigrated and frequently overlooked in a society that’s held in thrall to an “Extrovert Ideal — the omnipresent belief that the ideal self is gregarious, alpha and comfortable in the spotlight,” Cain’s introverts are overwhelmed by the social demands thrust upon them. They’re also underwhelmed by the example set by the voluble, socially successful go-getters in their midst who “speak without thinking,” in the words of a Chinese software engineer whom Cain encounters in Cupertino, Calif.
Many of the self-avowed introverts she meets in the course of this book... ape extroversion...some fake it well enough to make it, going along to get along in a country that rewards the outgoing...Unchecked extroversion — a personality trait Cain ties to ebullience, excitability, dominance, risk-taking, thick skin, boldness and a tendency toward quick thinking and thoughtless action — has actually, she argues, come to pose a real menace of late. The outsize reward-seeking tendencies of the hopelessly outer-directed helped bring us the bank meltdown of 2008...she claims...it’s time to establish “a greater balance of power” between those who rush to speak and do and those who sit back and think. Introverts — who, according to Cain, can count among their many virtues the fact that “they’re relatively immune to the lures of wealth and fame” — must learn to “embrace the power of quiet.” And extroverts should learn to sit down and shut up.
Her accounts of introverted kids misunderstood and mishandled by their parents should give pause, for she rightly notes that introversion in children (often incorrectly viewed as shyness) is in some ways threatening to the adults around them. Indeed, in an age when kids are increasingly herded into classroom “pods” for group work, Cain’s insights into the stresses of nonstop socializing for some children are welcome; her advice that parents should choose to view their introverted offspring’s social style with understanding rather than fear is well worth hearing.
A...problem with Cain’s argument is her assumption that most introverts are actually suffering in their self-esteem. This may be true in the sorts of environments — Harvard Business School, corporate boardrooms, executive suites — that she knows best and appears to spend most of her time thinking about. Had she spent more time in other sorts of places, among other types of people — in research laboratories, for example, or among economists rather than businessmen and -women — she would undoubtedly have discovered a world of introverts quite contented with who they are, and who feel that the world has been good to them.

Friday, April 19, 2013

Free Will, continued - Prior unconscious brain activity predicts choices for abstract intentions!

I've been running a thread on free will and neuroscience in this blog, recently noting comments by Nahmias:
...As long as people understand that discoveries about how our brains work do not mean that what we think or try to do makes no difference to what happens, then their belief in free will is preserved. What matters to people is that we have the capacities for conscious deliberation and self-control that I’ve suggested we identify with free will.
...None of the evidence marshaled by neuroscientists and psychologists suggests that those neural processes involved in the conscious aspects of such complex, temporally extended decision-making are in fact causal dead ends. It would be almost unbelievable if such evidence turned up.
The almost unbelievable appears to have happened, with this from Soon et al. Interestingly, they identified a partial spatial and temporal overlap of choice-predictive signals with activity in the default mode network I reviewed in this past Monday's post. The abstract:
Unconscious neural activity has been repeatedly shown to precede and potentially even influence subsequent free decisions. However, to date, such findings have been mostly restricted to simple motor choices, and despite considerable debate, there is no evidence that the outcome of more complex free decisions can be predicted from prior brain signals. Here, we show that the outcome of a free decision to either add or subtract numbers can already be decoded from neural activity in medial prefrontal and parietal cortex 4 s before the participant reports they are consciously making their choice. These choice-predictive signals co-occurred with the so-called default mode brain activity pattern that was still dominant at the time when the choice-predictive signals occurred. Our results suggest that unconscious preparation of free choices is not restricted to motor preparation. Instead, decisions at multiple scales of abstraction evolve from the dynamics of preceding brain activity.
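For readers curious how a choice can be "decoded" from brain activity at all, here is a toy sketch of the general idea (not the authors' actual pipeline, which used searchlight pattern classification on fMRI data): a classifier is trained on multivoxel activity patterns and asked to predict the choice on held-out trials. Everything below — voxel counts, signal strength, the nearest-centroid classifier — is invented for illustration.

```python
# Toy multivoxel decoding sketch: predict a binary choice ("add" vs.
# "subtract") from synthetic voxel patterns, using a nearest-centroid
# classifier and leave-one-out cross-validation. All numbers invented.
import random

random.seed(0)
N_VOXELS = 20
N_TRIALS_PER_CLASS = 30

def make_trial(label):
    # Each class has a slightly different mean pattern plus noise.
    bias = 0.5 if label == "add" else -0.5
    return [random.gauss(bias, 1.0) for _ in range(N_VOXELS)], label

trials = [make_trial("add") for _ in range(N_TRIALS_PER_CLASS)] + \
         [make_trial("subtract") for _ in range(N_TRIALS_PER_CLASS)]

def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(N_VOXELS)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

correct = 0
for i, (pattern, label) in enumerate(trials):
    train = trials[:i] + trials[i + 1:]          # leave one trial out
    cents = {c: centroid([p for p, l in train if l == c])
             for c in ("add", "subtract")}
    guess = min(cents, key=lambda c: dist(pattern, cents[c]))
    correct += (guess == label)

accuracy = correct / len(trials)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

In the real study the interesting point is not the classifier but the timing: patterns recorded seconds before the reported decision already carried above-chance information about its outcome.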
And, a chunk from their discussion:
It is interesting that mental calculation, the more complex task, had less predictive lead time than a simple binary motor choice in our previous study. This could tentatively reflect a general limitation of unconscious processing in the sense that unconscious processes might be restricted in their ability to develop or stabilize complex representations such as abstract intentions. It is also worth noting that both studies showed the same dissociation between cortical regions that were predictive of the content versus the timing of the decision. This implies that the formation of an intention to act depends on interactions between the choice-predictive and time-predictive regions. The temporal profile of this interaction is likely to determine when the earliest choice-predictive information is available and might differ between tasks.
There was a partial spatial overlap between the choice-predictive brain regions and the DMN. Interestingly, the state of the DMN (default mode network) during the early preparatory phase still resembled that during off-task or “resting” periods. This lends further credit to the notion that the preparatory signals were not a result of conscious engagement with the task. Furthermore, the spatial and temporal overlap hints at a potential involvement of the DMN in unconscious choice preparation.
To summarize, we directly investigated the formation of spontaneous abstract intentions and showed that the brain may already start preparing for a voluntary action up to a few seconds before the decision enters into conscious awareness. Importantly, these results cannot be explained by motor preparation or general attentional mechanisms. We found that frontopolar and precuneus/posterior cingulate encoded the content of the upcoming decision, but not the timing. In contrast, the pre-SMA predicted the timing of the decision, but not the content.

Thursday, April 18, 2013

Showing where moral intent is determined in our brains.

Interesting work from Koster-Hale et al:
Intentional harms are typically judged to be morally worse than accidental harms. Distinguishing between intentional harms and accidents depends on the capacity for mental state reasoning (i.e., reasoning about beliefs and intentions), which is supported by a group of brain regions including the right temporo-parietal junction (RTPJ). Prior research has found that interfering with activity in RTPJ can impair mental state reasoning for moral judgment and that high-functioning individuals with autism spectrum disorders make moral judgments based less on intent information than neurotypical participants. Three experiments, using multivoxel pattern analysis, find that (i) in neurotypical adults, the RTPJ shows reliable and distinct spatial patterns of responses across voxels for intentional vs. accidental harms, and (ii) individual differences in this neural pattern predict differences in participants’ moral judgments. These effects are specific to RTPJ. By contrast, (iii) this distinction was absent in adults with autism spectrum disorders. We conclude that multivoxel pattern analysis can detect features of mental state representations (e.g., intent), and that the corresponding neural patterns are behaviorally and clinically relevant.

Wednesday, April 17, 2013

Brain training games don't actually make you smarter.

Wow...after having done several posts uncritically passing on studies by Jaeggi and others claiming that games to improve working memory, such as the n-back game, increase cognitive skills in other areas, a number of studies have failed to replicate these findings. Gareth Cook has written an interesting article on this in The New Yorker, suggesting that claims made by commercial software sites like Cogmed, Lumosity, and CogniFit are bogus.
Over the last year, however, the idea that working-memory training has broad benefits has crumbled. One group of psychologists, led by a team at Georgia Tech, set out to replicate the Jaeggi findings, but with more careful controls and seventeen different cognitive-skills tests. Their subjects showed no evidence whatsoever for improvement in intelligence. They also identified a pattern of methodological problems with experiments showing positive results, like poor controls and a reliance on a single measure of cognitive improvement. This failed replication was recently published in one of psychology’s top journals, and another, by a group at Case Western Reserve University, has been published since.
The recent meta-analysis, led by Monica Melby-Lervåg, of the University of Oslo, and also published in a top journal, is even more damning. Some studies are more convincing than others, because they include more subjects and show a larger effect. Melby-Lervåg’s paper laboriously accounts for this, incorporating what Jaeggi, Klingberg, and everyone else had reported. The meta-analysis found that the training isn’t doing anyone much good. If anything, the scientific literature tends to overstate effects, because teams that find nothing tend not to publish their papers. (This is known as the “file-drawer” effect.) A null result from meta-analysis, published in a top journal, sends a shudder through the spine of all but the truest of believers. In the meantime, a separate paper by some of the Georgia Tech scientists looked specifically at Cogmed’s training, which has been subjected to more scientific scrutiny than any other program. “The claims made by Cogmed,” they wrote, “are largely unsubstantiated.”
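For the technically minded, the pooling at the heart of a meta-analysis can be sketched in a few lines: each study's effect size is weighted by the inverse of its variance, so large, precise studies dominate the pooled estimate. The studies and numbers below are hypothetical, not taken from the Melby-Lervåg paper.

```python
# Minimal fixed-effect meta-analysis sketch with invented study data.

def fixed_effect_pool(effects, variances):
    """Inverse-variance-weighted mean effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Hypothetical studies: (effect size d, variance of d).
# Small variance = big, precise study.
effects   = [0.40, 0.10, -0.05, 0.02]
variances = [0.20, 0.02,  0.01, 0.01]

d, se = fixed_effect_pool(effects, variances)
print(f"pooled d = {d:.3f} +/- {1.96 * se:.3f}")
# prints "pooled d = 0.016 +/- 0.123" -- an interval spanning zero,
# i.e., the shape of a "null result from meta-analysis"
```

Note how the one enthusiastic small study (d = 0.40) is swamped by the larger null studies; that, in miniature, is why a careful meta-analysis can overturn a string of positive reports.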

Tuesday, April 16, 2013

Older brains - just as much nerve firing, but scrambled connections?

A group of colleagues at Imperial College London and Tsinghua University in Beijing have fitted glass windows onto the skulls of old and young mice. Contrary to expectation, they observe that older mice have more firing points than younger ones, but that the activity at those points is more erratic, with high turnover rates and wavering firing strengths. The older mice also performed less well on a memory test. The suggestion, then, is that the mental decline seen in aging may be due more to disorderly wiring than to loss of nerve cells. Their abstract:
Aging is a major risk factor for many neurological diseases and is associated with mild cognitive decline. Previous studies suggest that aging is accompanied by reduced synapse number and synaptic plasticity in specific brain regions. However, most studies, to date, used either postmortem or ex vivo preparations and lacked key in vivo evidence. Thus, whether neuronal arbors and synaptic structures remain dynamic in the intact aged brain and whether specific synaptic deficits arise during aging remains unknown. Here we used in vivo two-photon imaging and a unique analysis method to rigorously measure and track the size and location of axonal boutons in aged mice. Unexpectedly, the aged cortex shows circuit-specific increased rates of axonal bouton formation, elimination, and destabilization. Compared with the young adult brain, large (i.e., strong) boutons show 10-fold higher rates of destabilization and 20-fold higher turnover in the aged cortex. Size fluctuations of persistent boutons, believed to encode long-term memories, also are larger in the aged brain, whereas bouton size and density are not affected. Our data uncover a striking and unexpected increase in axonal bouton dynamics in the aged cortex. The increased turnover and destabilization rates of large boutons indicate that learning and memory deficits in the aged brain arise not through an inability to form new synapses but rather through decreased synaptic tenacity. Overall our study suggests that increased synaptic structural dynamics in specific cortical circuits may be a mechanism for age-related cognitive decline.
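The "turnover" the abstract refers to is, at bottom, simple bookkeeping: compare which boutons are present at two imaging sessions and count gains and losses relative to the totals. A minimal sketch with hypothetical bouton IDs follows; the paper's exact normalization may differ.

```python
# Sketch of a bouton turnover-rate calculation between two imaging
# sessions. Bouton IDs are hypothetical; a common convention is
# turnover = (gained + lost) / (N1 + N2).

def turnover_rate(session1, session2):
    """(gained + lost) / (N1 + N2) between two sets of tracked boutons."""
    s1, s2 = set(session1), set(session2)
    gained = len(s2 - s1)
    lost = len(s1 - s2)
    return (gained + lost) / (len(s1) + len(s2))

# A stable ("young") circuit: one bouton replaced between sessions.
young = turnover_rate({"b1", "b2", "b3", "b4"}, {"b1", "b2", "b3", "b5"})
# An unstable ("aged") circuit: three of four boutons replaced.
aged = turnover_rate({"b1", "b2", "b3", "b4"}, {"b1", "b5", "b6", "b7"})
print(young, aged)  # prints "0.25 0.75"
```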

Monday, April 15, 2013

A review - Mindfulness meditation and our brain's default versus attentional networks.

I've been doing some homework on potential topics to work up into a lecture, one of them being brain correlates of various meditative, attentional, or default mode states. The vocabulary used is sometimes contradictory between papers, but two categories emerge. One uses terms for thought like default, narrative focus, phenomenal, social reasoning, theory of mind, baseline setting, self referential, introspective, and stimulus independent. The contrasting descriptors are attentional, direct experience, experiential focus, task positive network, physical cause/effect reasoning.

This cooks down roughly to distinguishing between brain networks whose primary activity occurs during internal narrative focus versus those activated during direct attentional experience.

In reviewing previous mindblog posts on the default network I come up with a partial bibliography of reviews and experiments, and thought some readers might find it useful, a list in no particular order, with brief notes:

Reciprocal repression (mutual inhibition) between networks - nice graphics  - some muddying of definitions

Relationship of this mutual inhibition to mindfulness meditation , which notes Farb et al., 2007

Review (NYTimes) on power of concentration - mindfulness training causing increased connectivity in attentional and default networks. 

Review with graphics of MRI of default network activated by autobiographical memory, envisioning future, theory of mind, moral decision making. 

Tierney - virtues of a wandering mind.  (context, larger agenda, creativity)

Review of varieties of resting state activity

Change between operating systems during eyeblink.

Different components of default mode active in different kinds of memory.

Mental time travel and default network.

Synchronization of both modes between individuals.

Default network can be realized by multiple architectures (split brain patients).

Default network as underpinning of cerebral ‘connectome‘  - good graphic.

Development of human default network from being sparsely functionally connected at 7-9  years.

Default mode in Chimps and Monkeys

Association of default network with midline structures.

Friday, April 12, 2013

Why old folks more easily lose their way.

Wiener et al. make observations that shed light on why older people have more difficulty finding their car in a shopping mall's large parking lot when they exit the mall by a different door than the one they entered through (or following directions that involve an intersection when they approach it from a direction different from the one in which the directions were learned). From their introduction:
Everyday navigation can be based on different strategies. The hippocampus plays a key role in cognitive map or place strategies that rely on allocentric processing, whereas the parietal cortex and striatal circuits are involved in route or response strategies...To test the hypothesis that cognitive aging not only results in a shift away from allocentric strategies but in a specific preference for beacon-based strategies, we developed a novel experimental paradigm: participants first learned a route along a number of intersections and were then asked to rejoin the original route approaching the intersections from different directions. Trials in which participants approached the intersections from a direction different from that during training (see Fig. 1) allowed us (1) to compare the use and adoption of route-learning strategies between young and older participants and (2) to test for specific preferences for beacon-based strategies in older participants.
The abstract:
Efficient spatial navigation requires not only accurate spatial knowledge but also the selection of appropriate strategies. Using a novel paradigm that allowed us to distinguish between beacon, associative cue, and place strategies, we investigated the effects of cognitive aging on the selection and adoption of navigation strategies in humans. Participants were required to rejoin a previously learned route encountered from an unfamiliar direction. Successful performance required the use of an allocentric place strategy, which was increasingly observed in young participants over six experimental sessions. In contrast, older participants, who were able to recall the route when approaching intersections from the same direction as during encoding, failed to use the correct place strategy when approaching intersections from novel directions. Instead, they continuously used a beacon strategy and showed no evidence of changing their behavior across the six sessions. Given that this bias was already apparent in the first experimental session, the inability to adopt the correct place strategy is not related to an inability to switch from a firmly established response strategy to an allocentric place strategy. Rather, and in line with previous research, age-related deficits in allocentric processing result in shifts in preferred navigation strategies and an overall bias for response strategies. The specific preference for a beacon strategy is discussed in the context of a possible dissociation between beacon-based and associative-cue-based response learning in the striatum, with the latter being more sensitive to age-related changes.

Thursday, April 11, 2013

Defining when a visual stimulus becomes conscious to us.

Llinás and collaborators do a nice dissection of our conscious versus unconscious visual processing and note the timing (240 milliseconds) of a brain signal that correlates with our conscious awareness of a stimulus. (This is the same time epoch that I evoke in the "millisecond manager" term I use in several of my essays in the left column of this blog - a period during which we can note the onset of a visual or emotional perception before further action or interpretation begins.)
At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated.

Wednesday, April 10, 2013

Your smartphone, your social brain, and your vagus nerve.

I am always struck, when I go to a local Starbucks for my noon coffee or do a happy hour at a local bar, that the great majority of those present are staring intently at their smartphones or tablets, while sitting in an environment meant to encourage interaction. As we spend less and less time engaging in face-to-face positive social contact in public places, what are we losing? The increasing aversion to human contact exhibited by people addicted to staring at their small screens suggests that our social brain follows the same rule as the rest of our brain and body: "use it or lose it." I've done a post pointing to how modern hi-tech dialog devices fail to engage the evolved brain and body synchronization that accompanies face-to-face dialog.

In a recent NYTimes piece, Barbara Fredrickson describes some of her recent work on countering the toxic effects of isolation from direct person-to-person contact. Some clips, to which I have added a few reference links:
My research team and I conducted a longitudinal field experiment on the effects of learning skills for cultivating warmer interpersonal connections in daily life. Half the participants, chosen at random, attended a six-week workshop on an ancient mind-training practice known as metta, or “lovingkindness,” that teaches participants to develop more warmth and tenderness toward themselves and others....We discovered that the meditators not only felt more upbeat and socially connected but also altered a key part of their cardiovascular system called vagal tone. Scientists used to think vagal tone was largely stable, like your height in adulthood. Our data show that this part of you is plastic, too, and altered by your social habits.
To appreciate why this matters, here’s a quick anatomy lesson. Your brain is tied to your heart by your vagus nerve. Subtle variations in your heart rate reveal the strength of this brain-heart connection, and as such, heart-rate variability provides an index of your vagal tone. By and large, the higher your vagal tone the better. It means your body is better able to regulate the internal systems that keep you healthy, like your cardiovascular, glucose and immune responses.
Beyond these health effects, the behavioral neuroscientist Stephen Porges has shown that vagal tone is central to things like facial expressivity and the ability to tune in to the frequency of the human voice. By increasing people’s vagal tone, we increase their capacity for connection, friendship and empathy.
In short, the more attuned to others you become, the healthier you become, and vice versa. This mutual influence also explains how a lack of positive social contact diminishes people. Your heart’s capacity for friendship also obeys the biological law of “use it or lose it.” If you don’t regularly exercise your ability to connect face to face, you’ll eventually find yourself lacking some of the basic biological capacity to do so.
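For the technically minded, the heart-rate-variability index Fredrickson describes can be made concrete. One common time-domain measure is RMSSD, the root mean square of successive differences between beat-to-beat intervals; whether her group used this particular measure is my assumption, and the interval values below are invented.

```python
# Sketch of a simple heart-rate-variability measure used as a proxy
# for vagal tone. RMSSD is one standard time-domain index; the
# R-R interval values here are hypothetical.

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical beat-to-beat intervals (milliseconds):
steady   = [800, 805, 798, 802, 801, 799]   # low beat-to-beat variability
variable = [800, 850, 760, 830, 770, 840]   # high beat-to-beat variability

print(f"RMSSD steady:   {rmssd(steady):.1f} ms")
print(f"RMSSD variable: {rmssd(variable):.1f} ms")
```

Higher short-term variability (the second series) corresponds, on this logic, to stronger vagal modulation of the heartbeat, hence higher "vagal tone."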

Tuesday, April 09, 2013

Mindfulness training improves working memory and cognitive performance while reducing mind wandering.

Yet another study, by Mrazek et al., on the salutary effects of mindfulness:
Given that the ability to attend to a task without distraction underlies performance in a wide variety of contexts, training one’s ability to stay on task should result in a similarly broad enhancement of performance. In a randomized controlled investigation, we examined whether a 2-week mindfulness-training course would decrease mind wandering and improve cognitive performance. Mindfulness training improved both GRE reading-comprehension scores and working memory capacity while simultaneously reducing the occurrence of distracting thoughts during completion of the GRE and the measure of working memory. Improvements in performance following mindfulness training were mediated by reduced mind wandering among participants who were prone to distraction at pretesting. Our results suggest that cultivating mindfulness is an effective and efficient technique for improving cognitive function, with wide-reaching consequences.
(The GRE is the Graduate Record Examination meant to test cognitive capacity of graduate school applicants. Readers interested in the details of the experiment, performed on the usual batch of ~50 college undergraduates, can email me.)

Monday, April 08, 2013

Alteration of paralimbic self awareness circuits in behavioral addiction.

Changeux and collaborators look at the brain correlates of pathological gambling, evaluating whether addictions might occur because of a predisposition linked to abnormal functioning of a frontal circuitry associated with self awareness, preceding any use of drugs. Some clips:
The introduction of magnetoencephalography (MEG) has made it possible to study neural mechanisms even in deeper parts of the cortex with a high degree of temporal resolution in combination with a decent spatial resolution. This allows investigation of one of the major networks of the brain, the paralimbic interaction between the medial prefrontal/anterior cingulate (ACC) and medial parietal/posterior cingulate (PCC) cortices. This interaction has in several recent studies been associated with self-awareness.

Schematic representation of the medial cortical components of the paralimbic network of self-awareness. Schematic localization of the medial sources for MEG registration. Red, ACC; Blue, PCC.

They compared 14 pathological gamblers and 11 age- and sex-matched controls using a stop-signal task consisting of “go” and “nogo” trials. In go trials, the participant is instructed to press a button as soon as an “O” appears on the screen. In nogo trials, the O is followed by an “X,” and the participant is instructed to withhold his response. The task can be used to measure a number of variables associated with impulsivity such as the stop-signal reaction time (SSRT), which is the time required for the stop signal to be processed so a response can be withheld. In particular, the SSRT has been widely used as a valid measure of impulsivity in general, and in studies of patients suffering from addiction.
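Since a successful stop produces no response, the SSRT cannot be observed directly; it has to be estimated. A common approach is the integration method: take the percentile of the go-RT distribution corresponding to the probability of responding despite a stop signal, then subtract the mean stop-signal delay. A minimal sketch with hypothetical numbers (not necessarily the estimation procedure used in this study):

```python
# Integration-method sketch for estimating stop-signal reaction time.
# All reaction times and delays below are invented for illustration.

def ssrt_integration(go_rts_ms, p_respond_given_stop, mean_ssd_ms):
    rts = sorted(go_rts_ms)
    # nth-percentile go RT, where n = P(respond | stop signal)
    idx = min(int(p_respond_given_stop * len(rts)), len(rts) - 1)
    return rts[idx] - mean_ssd_ms

go_rts = [420, 450, 470, 480, 500, 510, 530, 560, 590, 640]
ssrt = ssrt_integration(go_rts, p_respond_given_stop=0.5, mean_ssd_ms=250)
print(f"estimated SSRT: {ssrt} ms")  # prints "estimated SSRT: 260 ms"
```

A longer estimated SSRT means the covert stopping process is slower, which is why the measure serves as an index of impulsivity.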
The main finding of the present study was that behavioral addiction is linked to abnormal activity in, and communication between, nodal regions of the paralimbic network of self-awareness, the ACC and PCC, which are effective in different aspects of self-awareness processing. Pathological gamblers had lower synchronization between the ACC and PCC at rest in the high gamma band compared with controls, and failed to show an increase in gamma synchronization during rest compared with the task (as observed in controls). These findings could not be attributed to previous drug abuse or smoking habits. Furthermore, pathological gamblers without previous drug abuse had lower PCC power than controls and gamblers with previous stimulant abuse during the stop-signal task. In contrast, a history of stimulant abuse in gamblers caused a marked increase in power across regions and frequencies both at rest and during the stop-signal task.

Friday, April 05, 2013

Training our emotional brain - improving affective control.

Schweizer et al. suggest that our ability to keep a cool head in emotionally charged situations can be enhanced by working memory training, because both capacities engage the same frontoparietal neural circuitry, comprising the dorsolateral prefrontal cortex (PFC) along with the inferior parietal and anterior cingulate cortices, which can downregulate experienced emotional distress through projections from the lateral and medial PFC to the amygdala and midbrain nuclei. Here is their abstract, followed by a description of the emotional working memory training (as distinct from standard working memory training) that was evaluated.
Affective cognitive control capacity (e.g., the ability to regulate emotions or manipulate emotional material in the service of task goals) is associated with professional and interpersonal success. Impoverished affective control, by contrast, characterizes many neuropsychiatric disorders. Insights from neuroscience indicate that affective cognitive control relies on the same frontoparietal neural circuitry as working memory (WM) tasks, which suggests that systematic WM training, performed in an emotional context, has the potential to augment affective control. Here we show, using behavioral and fMRI measures, that 20 d of training on a novel emotional WM protocol successfully enhanced the efficiency of this frontoparietal demand network. Critically, compared with placebo training, emotional WM training also accrued transfer benefits to a “gold standard” measure of affective cognitive control–emotion regulation. These emotion regulation gains were associated with greater activity in the targeted frontoparietal demand network along with other brain regions implicated in affective control, notably the subgenual anterior cingulate cortex. The results have important implications for the utility of WM training in clinical, prevention, and occupational settings.
A description of the training:
The emotional working memory training... comprised an affective dual n-back task consisting of a series of trials each of which simultaneously presented a face (for 500 ms) on a 4 × 4 grid on a monitor and a word (for 500–950 ms) over headphones. Each picture-word pair was followed by a 2500 ms interval during which participants responded via button press if either/both stimuli from the pair matched the corresponding stimuli presented n positions back; 60% of the words (e.g., evil, rape) and faces (fearful, angry, sad, or disgusted expressions) were emotionally negative with the others affectively neutral in tone. Trial presentation order was randomized across training sessions.
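The matching rule behind an n-back task is simple to state in code: a trial is a "target" when its stimulus matches the one presented n positions earlier (the dual version runs two such streams, face positions and spoken words, in parallel). A toy scorer for a single stream, with an invented stimulus sequence:

```python
# Toy n-back target scorer for one stimulus stream. The grid-cell
# sequence below is invented; the real task used face positions on a
# 4 x 4 grid plus simultaneously presented words.

def nback_targets(stimuli, n):
    """Return the indices at which a stimulus matches the one n back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

positions = ["A3", "B1", "A3", "C4", "A3", "C4"]  # grid cells over trials
print(nback_targets(positions, 2))  # prints "[2, 4, 5]"
```

In the dual task a participant must run this comparison simultaneously and independently for both streams, which is what makes the protocol a demanding working memory load, here with the added twist of emotionally negative content.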

Thursday, April 04, 2013

Impersonating your younger self makes your body physiologically younger - a rediscovered post.

For several years I've been trying to find or recall a MindBlog post or an article read, and couldn't come up with it. A blog reader sent an email recalling it, and I couldn't find it. FINALLY, on doing a string search in this blog (for 'mindfulness') I found it, an August 2010 post that I had given the misleading title of "The Psychology of Possibility." It referenced an article in Harvard Magazine on the work of Ellen Langer (1,2,3). Some of her early work is fascinating, and the post is worth repeating here:

An interesting article in the Harvard Magazine describes the life work of Ellen Langer, her demonstrations that our social self image (old versus young, for example) strongly patterns our actual vitality and physiology, her work on Mindfulness, unconscious processing, etc. I recommend that you read the article. Here are some clips from its beginning that hooked me (I actually did my own mini-repeat of the experiment described, a simple self-experiment of pretending that I had been transported back in time to 40 years ago, and convinced myself I was experiencing some of the effects described)...
In 1981, early in her career at Harvard, Ellen Langer and her colleagues piled two groups of men in their seventies and eighties into vans, drove them two hours north to a sprawling old monastery in New Hampshire, and dropped them off 22 years earlier, in 1959. The group who went first stayed for one week and were asked to pretend they were young men, once again living in the 1950s. The second group, who arrived the week afterward, were told to stay in the present and simply reminisce about that era. Both groups were surrounded by mid-century mementos—1950s issues of Life magazine and the Saturday Evening Post, a black-and-white television, a vintage radio—and they discussed the events of the time: the launch of the first U.S. satellite, Castro’s victory ride into Havana, Nikita Khrushchev and the need for bomb shelters.

...Before and after the experiment, both groups of men took a battery of cognitive and physical tests, and after just one week, there were dramatic positive changes across the board. Both groups were stronger and more flexible. Height, weight, gait, posture, hearing, vision—even their performance on intelligence tests had improved. Their joints were more flexible, their shoulders wider, their fingers not only more agile, but longer and less gnarled by arthritis. But the men who had acted as if they were actually back in 1959 showed significantly more improvement. Those who had impersonated younger men seemed to have bodies that actually were younger.