Monday, August 15, 2011

How Google affects our memory.

Daniel Wegner can be counted on to come up with interesting stuff. Here he reports a series of experiments showing how Google is taking a load off our explicit memory storage habits (a shift of the sort that occurred in the transition from the oral tradition to writing). As most of us know from daily experience, Google is replacing books and encyclopedias as our main group, or transactive, memory, and we are becoming better at remembering where information is stored than at remembering the information itself:
The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.

Friday, August 12, 2011

The art of musical notation - scoring outside the lines

Pat Muchmore writes a fascinating piece on musical scores that are presented in a more exotic form than the standard music clef notations that I and other musicians have spent many thousands of hours with. He uses the term ergodic notations,
...which I derive from the game and literary theorist Espen J. Aarseth’s phrase “ergodic literature.” These are writings that require some amount of effort to read beyond simply moving one’s eyes and flipping pages. There are ancient examples, such as Egyptian texts that span several walls across several rooms or, more recently, Islamic calligrams that render Arabic words like Allah and Bismillah in many different directions and scales.
The article contains numerous modern examples of ergodic scores, and also notes:
Ergodic notation is not new. Baude Cordier, a composer of the ars subtilior style, wrote during the first half of the 15th century. He created many graphic scores, one of the most elegant of which is for a piece called “Belle, bonne, sage.”
“Belle, bonne, sage” by Baude Cordier.


It’s a love song, so it’s rendered in the shape of a heart. The performance is essentially unaffected by the shape, but we needn’t condemn it — it’s a beautiful addition to the artwork. Furthermore, not every visual element is purely decorative: the red notes indicate a rhythmic alteration that was otherwise very difficult to notate at the time.
The article provides further examples of ergodic notations from modern composers George Crumb, Peter Maxwell Davies, and John Cage. It also includes notation and audio files of a composition by the author.

Thursday, August 11, 2011

Cowboys and Pit Crews

Yesterday's posting on Atul Gawande's writing has reminded me of his more recent essay "Cowboys and Pit Crews," on medical practice, which has been languishing in my list of potential posts:
The core structure of medicine—how health care is organized and practiced—emerged in an era when doctors could hold all the key information patients needed in their heads and manage everything required themselves. One needed only an ethic of hard work, a prescription pad, a secretary, and a hospital willing to serve as one’s workshop, loaning a bed and nurses for a patient’s convalescence, maybe an operating room with a few basic tools. We were craftsmen. We could set the fracture, spin the blood, plate the cultures, administer the antiserum. The nature of the knowledge lent itself to prizing autonomy, independence, and self-sufficiency among our highest values, and to designing medicine accordingly. But you can’t hold all the information in your head any longer, and you can’t master all the skills. No one person can work up a patient’s back pain, run the immunoassay, do the physical therapy, protocol the MRI, and direct the treatment of the unexpected cancer found growing in the spine.

Everyone has just a piece of patient care. We’re all specialists now—even primary-care doctors. A structure that prioritizes the independence of all those specialists will have enormous difficulty achieving great care.

We don’t have to look far for evidence. Two million patients pick up infections in American hospitals, most because someone didn’t follow basic antiseptic precautions. Forty per cent of coronary-disease patients and sixty per cent of asthma patients receive incomplete or inappropriate care. And half of major surgical complications are avoidable with existing knowledge. It’s like no one’s in charge—because no one is. The public’s experience is that we have amazing clinicians and technologies but little consistent sense that they come together to provide an actual system of care, from start to finish, for people. We train, hire, and pay doctors to be cowboys. But it’s pit crews people need.

Recently, you might be interested to know, I met an actual cowboy. He described to me how cowboys do their job today, herding thousands of cattle. They have tightly organized teams, with everyone assigned specific positions and communicating with each other constantly. They have protocols and checklists for bad weather, emergencies, the inoculations they must dispense. Even the cowboys, it turns out, function like pit crews now. It may be time for us to join them.

Wednesday, August 10, 2011

Atul Gawande on aging.

I've been assembling a short list of possible essay/lecture topics, and one of the putative titles is "You're gonna die... get over it." It would be in the spirit of a crisp and clear essay in The New Yorker by Atul Gawande titled "The way we age now," which I have posted before and re-post here:

...one of the best articles on aging that I have read, written by Atul Gawande (Asst. Prof. in the Harvard School of Public Health, and staff writer for the New Yorker Magazine). The article appears in the April 30 issue of the New Yorker.

Some clips:

Even though some genes have been shown to influence longevity in worms, fruit flies, and mice...
...scientists do not believe that our life spans are actually programmed into us.. (Deric note: in the post I just prepared for next Tuesday, this point is contested). After all, for most of our hundred-thousand-year existence—all but the past couple of hundred years—the average life span of human beings has been thirty years or less...Today, the average life span in developed countries is almost eighty years. If human life spans depend on our genetics, then medicine has got the upper hand. We are, in a way, freaks living well beyond our appointed time. So when we study aging what we are trying to understand is not so much a natural process as an unnatural one...

...complex systems—power plants, say—have to survive and function despite having thousands of critical components. Engineers therefore design these machines with multiple layers of redundancy: with backup systems, and backup systems for the backup systems. The backups may not be as efficient as the first-line components, but they allow the machine to keep going even as damage accumulates...within the parameters established by our genes, that’s exactly how human beings appear to work. We have an extra kidney, an extra lung, an extra gonad, extra teeth. The DNA in our cells is frequently damaged under routine conditions, but our cells have a number of DNA repair systems. If a key gene is permanently damaged, there are usually extra copies of the gene nearby. And, if the entire cell dies, other cells can fill in.

Nonetheless, as the defects in a complex system increase, the time comes when just one more defect is enough to impair the whole, resulting in the condition known as frailty. It happens to power plants, cars, and large organizations. And it happens to us: eventually, one too many joints are damaged, one too many arteries calcify. There are no more backups. We wear down until we can’t wear down anymore.
Gawande proceeds to a discussion of social and medical consequences of people over 65 becoming 20% of the population.
Improvements in the treatment and prevention of heart disease, respiratory illness, stroke, cancer, and the like mean that the average sixty-five-year-old can expect to live another nineteen years—almost four years longer than was the case in 1970. (By contrast, from the nineteenth century to 1970, sixty-five-year-olds gained just three years of life expectancy.)

The result has been called the “rectangularization” of survival. Throughout most of human history, a society’s population formed a sort of pyramid: young children represented the largest portion—the base—and each successively older cohort represented a smaller and smaller group. In 1950, children under the age of five were eleven per cent of the U.S. population, adults aged forty-five to forty-nine were six per cent, and those over eighty were one per cent. Today, we have as many fifty-year-olds as five-year-olds. In thirty years, there will be as many people over eighty as there are under five.

Americans haven’t come to grips with the new demography. We cling to the notion of retirement at sixty-five—a reasonable notion when those over sixty-five were a tiny percentage of the population, but completely untenable as they approach twenty per cent. People are putting aside less in savings for old age now than they have in any decade since the Great Depression. More than half of the very old now live without a spouse, and we have fewer children than ever before—yet we give virtually no thought to how we will live out our later years alone.

...medicine has been slow to confront the very changes that it has been responsible for—or to apply the knowledge we already have about how to make old age better. Despite a rapidly growing elderly population, the number of certified geriatricians fell by a third between 1998 and 2004.

Tuesday, August 09, 2011

Do 18-month-old humans have a theory of mind?

Senju et al., following up on an experiment by Meltzoff and Brooks, use a rather clever experimental design to show that 18-month-old children can attribute false beliefs to others, a capacity previously thought to appear only at 3-4 years of age:
In the research reported here, we investigated whether 18-month-olds would use their own past experience of visual access to attribute perception and consequent beliefs to other people. Infants in this study wore either opaque blindfolds (opaque condition) or trick blindfolds that looked opaque but were actually transparent (trick condition). Then both groups of infants observed an actor wearing one of the same blindfolds that they themselves had experienced, while a puppet removed an object from its location. Anticipatory eye movements revealed that infants who had experienced opaque blindfolds expected the actor to behave in accordance with a false belief about the object’s location, but that infants who had experienced trick blindfolds did not exhibit that expectation. Our results suggest that 18-month-olds used self-experience with the blindfolds to assess the actor’s visual access and to update her belief state accordingly. These data constitute compelling evidence that 18-month-olds infer perceptual access and appreciate its causal role in altering the epistemic states of other people.

Monday, August 08, 2011

In a nutshell....

I have to pass on the cover of the current New Yorker Magazine:


Effects of oxytocin in humans - a critical review

Over the past several years MindBlog has posted examples from the outpouring of work on the "trust hormone" oxytocin. Trends in Cognitive Sciences offers open access to this more critical and balanced review by Bartz et al. Their abstract:
Building on animal research, the past decade has witnessed a surge of interest in the effects of oxytocin on social cognition and prosocial behavior in humans. This work has generated considerable excitement about identifying the neurochemical underpinnings of sociality in humans, and discovering compounds to treat social functioning deficits. Inspection of the literature, however, reveals that the effects of oxytocin in the social domain are often weak and/or inconsistent. We propose that this literature can be informed by an interactionist approach in which the effects of oxytocin are constrained by features of situations and/or individuals. We show how this approach can improve understanding of extant research, suggest novel mechanisms through which oxytocin might operate, and refine predictions about oxytocin pharmacotherapy.
By the way, the same issue of Trends in Cognitive Sciences has a brief note by van Honk et al. on testosterone as a social hormone, also noting the complexity of hormone-behavior relationships (PDF here).

Friday, August 05, 2011

Macho mice make manly melodies.

Susan Reardon points to work by Pasch et al. at the University of Florida at Gainesville, who compared the songs of castrated male mice (singing mice from Costa Rica) with those of castrated males given a male hormone implant. Females were attracted to speakers playing recordings of the songs of hormonally enhanced males (audio file here, video file in links above).

Thursday, August 04, 2011

Boredom - a Lively History

Peter Toohey's book with the title of this post is reviewed by Anthony Gottlieb in the NYTimes:
In Oscar Wilde’s play “A Woman of No Importance,” Lord Illingworth says of society: “To be in it is merely a bore. But to be out of it simply a tragedy.” To be a bore oneself is the ultimate failing and makes one the target for a quintessentially English put-down. “Even the grave yawns for him,” the actor and theater manager Sir Herbert Beerbohm Tree once said of an earnest writer. ...it was (and still is) regarded in some quarters as stylish and rather aristocratic to suffer from boredom, so the English ought really to thank their bores for providing them with the occasion to display wit and appear grand.

Toohey...suggests that the unpleasant feeling of simple boredom developed as a warning signal to steer us away from social situations that are “confined, predictable, too samey for one’s sanity.” In other words, it is a useful aversion: the discomfort of boredom is a blessing in disguise...a colleague of his once argued that there isn’t really any such thing as boredom, just a blurring together of a constellation of feelings and moods — frustration, surfeit, apathy and the like. Toohey rejects this idea, and perhaps there is indeed little harm in keeping the word, provided that one is vigilantly aware of the loose, subjective and confusing ways in which it is often used. When the actor George Sanders — the archetypal cad, at least on-screen, and in the title of his autobiography — committed suicide in a Spanish hotel in 1972, he left a note that began: “Dear World, I am leaving because I am bored.” It is worth noting that he was ill, lonely and had sold his beloved house on Majorca. Was boredom really what his death was about? When a man says he is bored — as Oscar Wilde never quite got round to saying — it sometimes means that he cannot be bothered to tell you what really ails him.

Wednesday, August 03, 2011

Collectivism promotes bribery

From Mazar and Aggarwal:
Why are there national differences in the propensity to bribe? To investigate this question, we conducted a correlational study with cross-national data and a laboratory experiment. We found a significant effect of the degree of collectivism versus individualism present in a national culture on the propensity to offer bribes to international business partners. Furthermore, the effect was mediated by individuals’ sense of responsibility for their actions. Together, these results suggest that collectivism promotes bribery through lower perceived responsibility for one’s actions.
Later note: I forgot to include the link to this article; it's now added.

Tuesday, August 02, 2011

Diversity is Universal

Here is an interesting little nugget from Joan Chiao:
At every level in the vast and dynamic world of living things lies diversity. From biomes to biomarkers, the complex array of solutions to the most basic problems regarding survival in a given environment afforded to us by nature is riveting. In the world of humans alone, diversity is apparent in the genome, in the brain and in our behavior.

The mark of multiple populations lies in the fabric of our DNA. The signature of selfhood in the brain holds dual frames, one for thinking about one's self as absolute, the other in context of others. From this biological diversity in humans arises cultural diversity directly observable in nearly every aspect of how people think, feel and behave. From classrooms to conventions across continents, the range and scope of human activities is stunning.

Recent centuries have seen the scientific debate regarding the nature of human nature cast as a dichotomy between diversity on the one hand and universalism on the other. Yet a seemingly paradoxical, but tractable, scientific concept that may enhance our cognitive toolkit over time is the simple notion that diversity is universal.

Monday, August 01, 2011

The sunny side of smut.

Coming across an article with the same title as this post gave me an immediate flashback to my days at Harvard, when as a graduate student and resident tutor in Winthrop House I would invite down various campus notables to have dinner in the dining hall at a table with my students (coats and ties were still required then), after which we retired to the common room for a chat over sherry (sigh....the good old days). The guest I'm remembering was the famous psychologist B.F. Skinner, whose immediate response, when he was asked how he managed to remain so vital at his advanced age, was "I read pornography." Here are a few clips from the article in Scientific American on this topic by Moyer:
...Now pornography is just one Google search away, and much of it is free. Age restrictions have become meaningless, too, with the advent of social media—one teenager in five has sent or posted naked pictures of themselves online...Certainly pornography addiction or overconsumption seems to cause relationship problems...But what about the more casual exposure typical of most porn users?...“There’s absolutely no evidence that pornography does anything negative,” says Milton Diamond​, director of the Pacific Center for Sex and Society at the University of Hawaii at Manoa. “It’s a moral issue, not a factual issue.”...Perhaps the most serious accusation against pornography is that it incites sexual aggression. But not only do rape statistics suggest otherwise, some experts believe the consumption of pornography may actually reduce the desire to rape by offering a safe, private outlet for deviant sexual desires...as access to pornography grew in once restrictive Japan, China and Denmark in the past 40 years, rape statistics plummeted. Within the U.S., the states with the least Internet access between 1980 and 2000—and therefore the least access to Internet pornography—experienced a 53 percent increase in rape incidence, whereas the states with the most access experienced a 27 percent drop in the number of reported rapes.

It is important to note that these associations are just that—associations. They do not prove that pornography is the cause of the observed crime reductions. Nevertheless, the trends just don’t fit with the theory that rape and sexual assault are in part influenced by pornography...patients requesting treatment in clinics for sex offenders commonly say that pornography helps them keep their abnormal sexuality within the confines of their imagination. Pornography seems to be protective...perhaps because exposure correlates with lower levels of sexual repression, a potential rape risk factor.

Friday, July 29, 2011

MindBlog retrospective: A new description of our inner lives.

This is another of my old posts that emerged from the retrospective scan of this blog that I did recently, another interesting perspective I don't want to lose touch with. It drew a number of comments, and a second post several months later discussed them. Here is a repeat of the original post:

I rarely mention my internal experience and sensations on this blog - first, because I have viewed readers as "wanting the beef," objective stuff on how minds work; second, and more important, because my experience of noting the flow of my brain products as emotion-laced chunks of sensing/cognition/action - knowing the names of the neurotransmitters and hormones acting during desire, arousal, calming, or affiliation - strikes me as a process that would feel quite alien to most people. Still, if we are materialists who believe that someday we will understand how the brain-body generates our consciousness and sense of a self, we will be able to think in terms like the following (a quote taken from Larissa MacFarquhar's profile of Paul and Patricia Churchland in the Feb. 12 New Yorker Magazine):

"...he and Pat like to speculate about a day when whole chunks of English, especially the bits that constitute folk psychology, are replaced by scientific words that call a thing by its proper name rather than some outworn metaphor... as people learn to speak differently they will learn to experience differently, and sooner or later even their most private introspections will be affected. Already Paul feels pain differently than he used to: when he cuts himself shaving now he feels not "pain" but something more complicated - first the sharp, superficial A-delta-fibre pain, and then a couple of seconds later, the sickening, deeper feeling of C-fibre pain that lingers. The new words, far from being reductive or dry, have enhanced his sensations, he feels, as an oenophile's complex vocabulary enhances the taste of wine."

"Paul and Pat, realizing that the revolutionary neuroscience they dream of is still in its infancy, are nonetheless already preparing themselves for this future, making the appropriate adjustments in their everyday conversation. One afternoon recently, Paul says, he was home making dinner when Pat burst in the door, having come straight from a frustrating faculty meeting. "She said, 'Paul, don't speak to me, my serotonin levels have hit bottom, my brain is awash in glucocorticoids, my blood vessels are full of adrenaline, and if it weren't for my endogenous opiates I'd have driven the car into a tree on the way home. My dopamine levels need lifting. Pour me a Chardonnay, and I'll be down in a minute.' " Paul and Pat have noticed that it is not just they who talk this way - their students now talk of psychopharmacology as comfortably as of food."

Thursday, July 28, 2011

The utility of being vague.

I'm just getting to glance at the last few issues of Psychological Science, and find this gem, "In Praise of Vagueness" by Mishra et al., which they introduce as follows:
People are increasingly surrounded by devices that provide highly precise information. For instance, technologically advanced bathroom scales can now give measurements of weight, body fat, and hydration levels within two and even three decimal places. People can find out exactly how many calories they are eating, how much weight they can lift, and how many steps they walk in a typical day. The overarching belief exemplified by the use of such technologies could be summed up by the phrase, “If I can measure it, I can manage it.” In other words, people seem to believe that precise information increases their likelihood of performing better and meeting personal goals (e.g., improving physical strength or losing weight). People generally prefer precise information over vague information because precise information gives them a greater sense of security and confidence in their ability to predict unknown outcomes in their environment. Despite this preference, we have found that vague information sometimes serves people better than precise information does.

Why might individuals perform better when they receive vague information than when they receive precise information? We posit that vague information allows individuals leeway in interpretation so that they form expectancies in accordance with the outcomes that they desire. Further, we posit that these positive expectancies can give rise to favorable performance-related outcomes.
Their experiments examined people's progress toward goals when they were given precise versus vague (error range given) feedback on that progress. Perhaps the most striking example was provided by the weight-loss experiment: participants given precise feedback gained, on average, one pound over the course of the experiment, while those given vague feedback lost nearly four pounds. Here is their abstract:
Is the eternal quest for precise information always worthwhile? Our research suggests that, at times, vagueness has its merits. Previous research has demonstrated that people prefer precise information over vague information because it gives them a sense of security and makes their environments more predictable. However, we show that the fuzzy boundaries afforded by vague information can actually help individuals perform better than can precise information. We document these findings across two laboratory studies and one quasi–field study that involved different performance-related contexts (mental acuity, physical strength, and weight loss). We argue that the malleability of vague information allows people to interpret it in the manner they desire, so that they can generate positive response expectancies and, thereby, perform better. The rigidity of precise information discourages desirable interpretations. Hence, on certain occasions, precise information is not as helpful as vague information in boosting performance.

Wednesday, July 27, 2011

Inappropriate cravings? Hold a magnet by your head!

Here's an idea for a BioTech startup!...(I'm not serious)...suggested by an article from McClernon et al. titled "Repetitive Transcranial Magnetic Stimulation of the Superior Frontal Gyrus Modulates Craving for Cigarettes."
BACKGROUND: Previous functional magnetic resonance imaging studies have shown strong correlations between cue-elicited craving for cigarettes and activation of the superior frontal gyrus (SFG). Repetitive transcranial magnetic stimulation (rTMS) offers a noninvasive means to reversibly affect brain cortical activity, which can be applied to testing hypotheses about the causal role of SFG in modulating craving.

METHODS: Fifteen volunteer smokers were recruited to investigate the effects of rTMS on subjective responses to smoking versus neutral cues and to controlled presentations of cigarette smoke. On different days, participants were exposed to three conditions: 1) high-frequency (10 Hz) rTMS directed at the SFG; 2) low-frequency (1 Hz) rTMS directed at the SFG; and 3) low-frequency (1 Hz) rTMS directed at the motor cortex (control condition).

RESULTS: Craving ratings in response to smoking versus neutral cues were differentially affected by the 10-Hz versus 1-Hz SFG condition. Craving after smoking cue presentations was elevated in the 10-Hz SFG condition, whereas craving after neutral cue presentations was reduced. Upon smoking in the 10-Hz SFG condition, ratings of immediate craving reduction as well as the intensity of interoceptive airway sensations were also attenuated.

CONCLUSIONS: These results support the view that the SFG plays a role in modulating craving reactivity; moreover, the results suggest that the SFG plays a role in both excitatory and inhibitory influences on craving, consistent with prior research demonstrating the role of the prefrontal cortex in the elicitation as well as inhibition of drug-seeking behaviors.
By the way, from Wikipedia via Google Images, here is the superior frontal gyrus:

Tuesday, July 26, 2011

Watching Humor in the Brain

Bekinschtein et al. look at what may be the brain correlates of "humor as a cognitive cleanup mechanism" mentioned in my June 17 post, at least in the case of jokes that depend on semantic ambiguity resolution:
What makes us laugh? One crucial component of many jokes is the disambiguation of words with multiple meanings. In this functional MRI study of normal participants, the neural mechanisms that underlie our experience of getting a joke that depends on the resolution of semantically ambiguous words were explored. Jokes that contained ambiguous words were compared with sentences that contained ambiguous words but were not funny, as well as to matched verbal jokes that did not depend on semantic ambiguity. The results confirm that both the left inferior temporal gyrus and left inferior frontal gyrus are involved in processing the semantic aspects of language comprehension, while a more widespread network that includes both of these regions and the temporoparietal junction bilaterally is involved in processing humorous verbal jokes when compared with matched nonhumorous material. In addition, hearing jokes was associated with increased activity in a network of subcortical regions, including the amygdala, the ventral striatum, and the midbrain, that have been implicated in experiencing positive reward. Moreover, activity in these regions correlated with the subjective ratings of funniness of the presented material. These results allow a more precise account of how the neural and cognitive processes that are involved in ambiguity resolution contribute to the appreciation of jokes that depend on semantic ambiguity.

Monday, July 25, 2011

Confabulation

Here is an entry from Fiery Cushman on the Edge.org question "What scientific concept would improve everybody's cognitive toolkit?," on how we frequently rationalize our behavior, unaware of unconscious factors that actually guided it. Here are some clips:
We are shockingly ignorant of the causes of our own behavior. The explanations that we provide are sometimes wholly fabricated, and certainly never complete. Yet, that is not how it feels. Instead it feels like we know exactly what we're doing and why. This is confabulation: Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties…The problem is that we get all of our explanations partly right, correctly identifying the conscious and deliberate causes of our behavior. Unfortunately, we mistake "partly right" for "completely right", and thereby fail to recognize the equal influence of the unconscious, or to guard against it.

People make harsher moral judgments in foul-smelling rooms, reflecting the role of disgust as a moral emotion. Women are less likely to call their fathers (but equally likely to call their mothers) during the fertile phase of their menstrual cycle, reflecting a means of incest avoidance. Students indicate greater political conservatism when polled near a hand-sanitizing station during a flu epidemic, reflecting the influence of a threatening environment on ideology. They also indicate a closer bond to their mother when holding hot coffee versus iced coffee, reflecting the metaphor of a "warm" relationship.

Automatic behaviors can be remarkably organized, and even goal-driven. For example, research shows that people tend to cheat just as much as they can without realizing that they are cheating. This is a remarkable phenomenon: Part of you is deciding how much to cheat, calibrated at just the level that keeps another part of you from realizing it.

One of the ways that people pull off this trick is with innocent confabulations: When self-grading an exam, students think, "Oh, I was going to circle e, I really knew that answer!" This isn't a lie, any more than it's a lie to say you have always loved your mother (latte in hand), but don't have time to call your dad during this busy time of the month. These are just incomplete explanations, confabulations that reflect our conscious thoughts while ignoring the unconscious ones.

Perhaps you have noticed that people have an easier time sniffing out unseemly motivations for others' behavior than recognizing the same motivations in their own behavior…we jump to the conclusion that others' behaviors reflect their bad motives and poor judgment, attributing conscious choice to behaviors that may have been influenced unconsciously… we assume that our own choices were guided solely by the conscious explanations that we conjure, and reject or ignore the possibility of our own unconscious biases...By understanding confabulation we can begin to remedy both faults.

Friday, July 22, 2011

The importance of our brains’ resting state activity.

In an open access article, Pizoli et al. describe the clinical case of a young boy with epileptic encephalopathy who underwent successful corpus callosotomy surgery (severing the connections between the two hemispheres) to treat drop seizures. Their findings suggest that resting-state brain activity (temporal synchrony across distributed brain regions, termed resting-state networks, that persists during waking, sleep, and anesthesia) is required for normal brain development and maintenance:
One of the most intriguing recent discoveries concerning brain function is that intrinsic neuronal activity manifests as spontaneous fluctuations of the blood oxygen level–dependent (BOLD) functional MRI signal. These BOLD fluctuations exhibit temporal synchrony within widely distributed brain regions known as resting-state networks. Resting-state networks are present in the waking state, during sleep, and under general anesthesia, suggesting that spontaneous neuronal activity plays a fundamental role in brain function. Despite its ubiquitous presence, the physiological role of correlated, spontaneous neuronal activity remains poorly understood. One hypothesis is that this activity is critical for the development of synaptic connections and maintenance of synaptic homeostasis. We had a unique opportunity to test this hypothesis in a 5-y-old boy with severe epileptic encephalopathy. The child developed marked neurologic dysfunction in association with a seizure disorder, resulting in a 1-y period of behavioral regression and progressive loss of developmental milestones. His EEG showed a markedly abnormal pattern of high-amplitude, disorganized slow activity with frequent generalized and multifocal epileptiform discharges. Resting-state functional connectivity MRI showed reduced BOLD fluctuations and a pervasive lack of normal connectivity. The child underwent successful corpus callosotomy surgery for treatment of drop seizures. Postoperatively, the patient's behavior returned to baseline, and he resumed development of new skills. The waking EEG revealed a normal background, and functional connectivity MRI demonstrated restoration of functional connectivity architecture. These results provide evidence that intrinsic, coherent neuronal signaling may be essential to the development and maintenance of the brain's functional organization.

Thursday, July 21, 2011

Pressure to conform - survey of tight and loose cultures.

Gelfand et al. have constructed a metric they term "tightness-looseness" - the extent to which societies impose social norms - and have collected data from ~7,000 individuals across 33 large-scale cultures. Their questionnaire asked people to rate the appropriateness of 12 behaviors (such as eating or crying) in 15 situations (such as being in a bank or at a party). Then, they compared the responses to an array of ecological and historical factors. From Norenzayan's summary:
...Overall, they found that societies exposed to contemporary or historical threats, such as territorial conflict, resource scarcity, or exposure to high levels of pathogens, more strictly regulate social behavior and punish deviance. These societies are also more likely to have evolved institutions that strictly regulate social norms. At the psychological level, individuals in tightly regulated societies report higher levels of self-monitoring, more intolerant attitudes toward outsiders, and paying stricter attention to time. In this multilevel analysis, ecological, historical, institutional, and psychological variables comprise a loosely integrated system that defines a culture.

These findings complement a growing literature that reveals the power of the comparative approach in explaining critically important features of human behavior. For example, research suggests that the substantial variation in religious involvement among nations can be explained, in large part, by perceived levels of security. Religion thrives when existential threats to human security, such as war or natural disaster, are rampant, and declines considerably in societies with high levels of economic development, low income inequality and infant mortality, and greater access to social safety nets.

Wednesday, July 20, 2011

The Forbidden Fruit Intuition - our inability to cope with what we know about our minds.

I've recently done a scan of old MindBlog posts, and a number of them stand out so strongly for me that I want to repeat their ideas, hoping repetition will aid intellectual assimilation. Here, then, is a post from April 2006 on Thomas Metzinger's brief essay titled "The Forbidden Fruit Intuition." In the initial post I didn't point to his first paragraph:
We all would like to believe that, ultimately, intellectual honesty is not only an expression of, but also good for your mental health. My dangerous question is if one can be intellectually honest about the issue of free will and preserve one's mental health at the same time. Behind this question lies what I call the "Forbidden Fruit Intuition": Is there a set of questions which are dangerous not on grounds of ideology or political correctness, but because the most obvious answers to them could ultimately make our conscious self-models disintegrate? Can one really believe in determinism without going insane?
Here is the original post:

I get frustrated when I try to reconcile what I know from empirical data to be true about my self (see the "I-Illusion" essay on this website) with the commonsense feeling of agency and responsibility that we all share.

Our commonsense conceptions of ourselves have co-evolved over hundreds of thousands of years, along with their physiological, homeostatic, neuroendocrine, and limbic emotional correlates. This whole complex (us, that is) can be upset when it faces what it can come to know to be true about the impersonal physical processes that actually run our show, and finds it impossible to integrate that knowledge with its 'illusory' self-image.

Here is a clip, and then its more extended context, from the piece by Metzinger on edge.org: his response to the question "What is your dangerous idea?" He frames it much better than I can. First the clip:

"I think that the irritation and deep sense of resentment surrounding public debates on the freedom of the will actually has nothing much to do with the actual options on the table. It has to do with the perfectly sensible intuition that our presently obvious answer will not only be emotionally disturbing, but ultimately impossible to integrate into our conscious self-models."

Then the more extended quotation:

"For middle-sized objects at 37° like the human brain and the human body, determinism is obviously true. The next state of the physical universe is always determined by the previous state. And given a certain brain-state plus an environment you could never have acted otherwise. A surprisingly large majority of experts in the free-will debate today accept this obvious fact...."

"Yes, you are a physically determined system. But this is not a big problem, because, under certain conditions, we may still continue to say that you are "free": all that matters is that your actions are caused by the right kinds of brain processes and that they originate in you. A physically determined system can well be sensitive to reasons and to rational arguments, to moral considerations, to questions of value and ethics, as long as all of this is appropriately wired into its brain. You can be rational, and you can be moral, as long as your brain is physically determined in the right way. You like this basic idea: physical determinism is compatible with being a free agent. You endorse a materialist philosophy of freedom as well. An intellectually honest person open to empirical data, you simply believe that something along these lines must be true.

Now you try to feel that it is true. You try to consciously experience the fact that at any given moment of your life, you could not have acted otherwise. You try to experience the fact that even your thoughts, however rational and moral, are predetermined — by something unconscious, by something you can not see. And in doing so, you start fooling around with the conscious self-model Mother Nature evolved for you with so much care and precision over millions of years: You are scratching at the user-surface of your own brain, tweaking the mouse-pointer, introspectively trying to penetrate into the operating system, attempting to make the invisible visible. You are challenging the integrity of your phenomenal self by trying to integrate your new beliefs, the neuroscientific image of man, with your most intimate, inner way of experiencing yourself. How does it feel?

I think that the irritation and deep sense of resentment surrounding public debates on the freedom of the will actually has nothing much to do with the actual options on the table. It has to do with the perfectly sensible intuition that our presently obvious answer will not only be emotionally disturbing, but ultimately impossible to integrate into our conscious self-models.

Or our societies: The robust conscious experience of free will also is a social institution, because the attribution of accountability, responsibility, etc. are the decisive building blocks for modern, open societies. And the currently obvious answer might be interpreted by many as having clearly anti-democratic implications: Making a complex society work implies controlling the behavior of millions of people; if individual human beings can control their own behavior to a much lesser degree than we have thought in the past, if bottom-up doesn't work, then it becomes tempting to control it top-down, by the state. And this is the second way in which enlightenment could devour its own children. Yes, free will truly is a dangerous question, but for different reasons than most people think. "