Tuesday, April 12, 2011

Hurt the flesh, cleanse the soul....

Here are some slightly edited summary clips from an interesting study by Bastian et al. (performed on the usual batch of college undergraduates, paid $10 for their participation):
Pain purifies. History is replete with examples of ritualized or self-inflicted pain aimed at achieving purification...When reminded of an immoral deed, people are motivated to experience physical pain. Student participants in the study who wrote about an unethical behavior not only held their hands in ice water longer but also rated the experience as more painful than did participants who wrote about an everyday interaction. Critically, experiencing pain reduced people’s feelings of guilt, and the effect of the painful task on ratings of guilt was greater than the effect of a similar but nonpainful task.

Pain has traditionally been understood as purely physical in nature, but it is more accurate to describe it as the intersection of body, mind, and culture. People give meaning to pain, and we argue that people interpret pain within a judicial model of pain as punishment. Our results suggest that the experience of pain has psychological currency in rebalancing the scales of justice—an interpretation of pain that is analogous to notions of retributive justice. Interpreted in this way, pain has the capacity to resolve guilt.

People are socialized to understand pain within this judicial framework. Physical pain is employed as a penalty (e.g., spanking children for misbehavior), and unexplained pain is often understood as punishment from God. The judicial model is explicit in the Latin word for pain, poena, which means “to pay the penalty.” Understood this way, pain may be perceived as repayment for sin in three ways. First, pain is the embodiment of atonement. Just as physical cleansing washes away sin, physical pain is experienced as a penalty, and paying that penalty reestablishes moral purity. Second, subjecting oneself to pain communicates remorse to others (including God) and signals that one has paid for one’s sins, and this removes the threat of external punishment. Third, tolerating the punishment of pain is a test of one’s virtue, reaffirming one’s positive identity to oneself and others.

Previous work has demonstrated that giving meaning to pain affects people’s management of that pain. By introducing the judicial model of pain, we emphasize that giving meaning to pain can also affect other psychological processes. Although additional research is needed, our findings demonstrate that experiencing pain as a penalty can cause people to feel that their guilt is resolved and their soul cleansed.

Monday, April 11, 2011

Improving your cognitive toolkit - VII

Continuation of my sampling of a few of the answers to the annual question at edge.org, "What scientific concept would improve everybody's cognitive toolkit?":
Alun Anderson - Homo Dilatus
Our species might well be renamed Homo Dilatus, the procrastinating ape. Somewhere in our evolution we acquired the brain circuitry to deal with sudden crises and respond with urgent action. Steady declines and slowly developing threats are quite different. "Why act now when the future is far off?" is the maxim for a species designed to deal with near-term problems and not long-term uncertainties. It's a handy view of humankind which everyone who uses science to change policy should keep in their mental tool kit, and one that is greatly reinforced by the endless procrastination in tackling climate change. Cancun follows Copenhagen follows Kyoto, but the more we dither and no extraordinary disaster follows, the more dithering seems just fine.

Such behaviour is not unique to climate change. It took the sinking of the Titanic to put sufficient lifeboats on passenger ships, the huge spill from the Amoco Cadiz to set international marine pollution rules, and the Exxon Valdez disaster to drive the switch to double-hulled tankers. The same pattern is seen in the oil industry, with the Gulf spill the latest chapter in the disaster-first, regulations-later mindset of Homo dilatus.
Geoffrey Miller - Personality traits are continuous with mental illnesses
Our instinctive way of thinking about insanity — our intuitive psychiatry — is dead wrong... There's a scientific consensus that personality traits can be well-described by five main dimensions of variation. These "Big Five" personality traits are called openness, conscientiousness, extraversion, agreeableness, and emotional stability. The Big Five are all normally distributed in a bell curve, statistically independent of each other, genetically heritable, stable across the life-course, unconsciously judged when choosing mates or friends, and found in other species such as chimpanzees. They predict a wide range of behavior in school, work, marriage, parenting, crime, economics, and politics.

Mental disorders are often associated with maladaptive extremes of the Big Five traits. Over-conscientiousness predicts obsessive-compulsive disorder, whereas low conscientiousness predicts drug addiction and other "impulse control disorders". Low emotional stability predicts depression, anxiety, bipolar, borderline, and histrionic disorders. Low extraversion predicts avoidant and schizoid personality disorders. Low agreeableness predicts psychopathy and paranoid personality disorder. High openness is on a continuum with schizotypy and schizophrenia. Twin studies show that these links between personality traits and mental illnesses exist not just at the behavioral level, but at the genetic level. And parents who are somewhat extreme on a personality trait are much more likely to have a child with the associated mental illness.

Friday, April 08, 2011

Music and language - differing brain activations

Rogalsky et al. provide a much more detailed description of which brain activations overlap and which differ during the processing of speech versus music. Here is their abstract, followed by a figure from the paper:
Language and music exhibit similar acoustic and structural properties, and both appear to be uniquely human. Several recent studies suggest that speech and music perception recruit shared computational systems, and a common substrate in Broca's area for hierarchical processing has recently been proposed. However, this claim has not been tested by directly comparing the spatial distribution of activations to speech and music processing within subjects. In the present study, participants listened to sentences, scrambled sentences, and novel melodies. As expected, large swaths of activation for both sentences and melodies were found bilaterally in the superior temporal lobe, overlapping in portions of auditory cortex. However, substantial nonoverlap was also found: sentences elicited more ventrolateral activation, whereas the melodies elicited a more dorsomedial pattern, extending into the parietal lobe. Multivariate pattern classification analyses indicate that even within the regions of blood oxygenation level-dependent response overlap, speech and music elicit distinguishable patterns of activation. Regions involved in processing hierarchical aspects of sentence perception were identified by contrasting sentences with scrambled sentences, revealing a bilateral temporal lobe network. Music perception showed no overlap whatsoever with this network. Broca's area was not robustly activated by any stimulus type. Overall, these findings suggest that basic hierarchical processing for music and speech recruits distinct cortical networks, neither of which involves Broca's area. We suggest that previous claims are based on data from tasks that tap higher-order cognitive processes, such as working memory and/or cognitive control, which can operate in both speech and music domains.

Figure - regions selective for speech versus music. Speech stimuli selectively activate more lateral regions in the superior temporal lobe bilaterally, while music stimuli selectively activate more medial anterior regions on the supratemporal plane, extending into the insula, primarily in the right hemisphere. (This apparently lateralized pattern for music does not mean that the right hemisphere preferentially processes music stimuli, as is often assumed. An analysis in the paper also shows that music activates both hemispheres rather symmetrically; the lateralization effect is in the relative activation patterns to music versus speech.)

Thursday, April 07, 2011

Elephants know when they need a helping trunk.

From de Waal and collaborators, evidence for convergent evolution of cooperation:
Elephants are widely assumed to be among the most cognitively advanced animals, even though systematic evidence is lacking. This void in knowledge is mainly due to the danger and difficulty of submitting the largest land animal to behavioral experiments. In an attempt to change this situation, a classical 1930s cooperation paradigm commonly tested on monkeys and apes was modified by using a procedure originally designed for chimpanzees (Pan troglodytes) to measure the reactions of Asian elephants (Elephas maximus). This paradigm explores the cognition underlying coordination toward a shared goal. What do animals know or learn about the benefits of cooperation? Can they learn critical elements of a partner's role in cooperation? Whereas observations in nature suggest such understanding in nonhuman primates, experimental results have been mixed, and little evidence exists with regards to nonprimates. Here, we show that elephants can learn to coordinate with a partner in a task requiring two individuals to simultaneously pull two ends of the same rope to obtain a reward. Not only did the elephants act together, they inhibited the pulling response for up to 45 s if the arrival of a partner was delayed. They also grasped that there was no point to pulling if the partner lacked access to the rope. Such results have been interpreted as demonstrating an understanding of cooperation. Through convergent evolution, elephants may have reached a cooperative skill level on a par with that of chimpanzees.

Wednesday, April 06, 2011

The brains of experts - due to predispositions and/or training?

Here is how Golestani et al. frame their interesting study on the brains of expert phoneticians - who typically spend one to four years of formal training learning to identify speech sounds and to transcribe them into an international phonetic alphabet. (Remember, you can view any of the brain structures they mention by simply entering their name in Google Image Search.) The work suggests that morphological brain differences at birth might well influence career choices. We tend to enjoy and get reinforcement for doing things we are good at. (I've always wondered about a brain correlate for my ability, from a young age, to sight-read any piece of sheet music put in front of me.)
Expertise has been shown to have both functional and structural correlates in the human brain. For example, expert golfers show a different pattern of neural activity than novice golfers when planning shots, and London taxi drivers have a larger posterior hippocampal volume than matched controls. It can be difficult to establish, however, the extent to which these effects relate to preexisting differences between the novice and expert groups, or whether these effects mainly arise from training-induced plasticity. Here we investigate brain anatomy in expert phoneticians...to distinguish experience-dependent plasticity from brain structural features that existed before the onset of expertise training.
From their abstract:
...We found a positive correlation between the size of left pars opercularis and years of phonetic transcription training experience, illustrating how learning may affect brain structure. Phoneticians were also more likely to have multiple or split left transverse gyri in the auditory cortex than nonexpert controls, and the amount of phonetic transcription training did not predict auditory cortex morphology. The transverse gyri are thought to be established in utero; our results thus suggest that this gross morphological difference may have existed before the onset of phonetic training, and that its presence confers an advantage of sufficient magnitude to affect career choices. These results suggest complementary influences of domain-specific predispositions and experience-dependent brain malleability, influences that likely interact in determining not only how experience shapes the human brain but also why some individuals become engaged by certain fields of expertise.

Tuesday, April 05, 2011

Competition for memory, its stabilization, and a memory enhancer

Kuhl et al. find that the competition in the brain between old memories and new ones associated with the same thing (for example, an old versus a new password, or yesterday's versus today's space in the parking lot) can be observed with fMRI. They found that competition between visual memories was captured in the relative degree to which target versus competing memories were reactivated within the ventral occipitotemporal cortex (VOTC). When lowered VOTC reactivation indicated that conflict between target and competing memories was high, frontoparietal mechanisms were markedly engaged, revealing specific neural mechanisms that track competing mnemonic evidence.

In another study on memory, Diekelmann et al. show that memory reactivation has opposing effects on memory stability during wakefulness and sleep. Reactivation during slow-wave sleep following learning can stabilize memories; reactivation during wakefulness has the opposite effect, rendering memories labile and susceptible to modification.

Finally, Benedict Carey points to a study by Shema et al. showing that increasing levels of a brain enzyme (a protein kinase C isoform) involved in memory formation enhances long-term memory. Also, Chen et al. show that injections of a different protein, a growth factor involved in memory formation (insulin-like growth factor II), can have the same effect.

Monday, April 04, 2011

Increasing the viewed size of a painful body part reduces the pain

Here's an interesting and useful bit from Mancini et al.:
Pain is a complex subjective experience that is shaped by numerous contextual factors. For example, simply viewing the body reduces the reported intensity of acute physical pain. In this study, we investigated whether this visually induced analgesia is modulated by the visual size of the stimulated body part. We measured contact heat-pain thresholds while participants viewed either their own hand or a neutral object in three size conditions: reduced, actual size, or enlarged. Vision of the body was analgesic, increasing heat-pain thresholds by an average of 3.2 °C. We further found that visual enlargement of the viewed hand enhanced analgesia, whereas visual reduction of the hand decreased analgesia. These results demonstrate that pain perception depends on multisensory representations of the body and that visual distortions of body size modulate sensory components of pain.

Saturday, April 02, 2011

Dynamic Views of Mindblog

I just pulled up the Blogger edit postings page for this blog and up pops a message "Did you know that...."

Turns out they have now added "Dynamic Views" - different ways of viewing a blog (flipcard, mosaic, sidebar, and timeslide...snapshot does not work for MindBlog). Click on the Dynamic Views of MindBlog link at the upper right to try the different viewing options.

Friday, April 01, 2011

A bad taste in the mouth - more on embodied cognition and emotion

Eskine et al. provide yet another example of how emotion induced by a physical stimulus can influence a moral stance. (Previous MindBlog postings have noted this effect for hard/soft or rough/smooth surfaces, hot/cold temperature stimuli, clean/dirty smells or visual images, etc.) The experimental subjects were the usual captive college psychology course undergraduates (54 of them in this case):
Can sweet-tasting substances trigger kind, favorable judgments about other people? What about substances that are disgusting and bitter? Various studies have linked physical disgust to moral disgust, but despite the rich and sometimes striking findings these studies have yielded, no research has explored morality in conjunction with taste, which can vary greatly and may differentially affect cognition. The research reported here tested the effects of taste perception on moral judgments. After consuming a sweet beverage, a bitter beverage, or water, participants rated a variety of moral transgressions. Results showed that taste perception significantly affected moral judgments, such that physical disgust (induced via a bitter taste) elicited feelings of moral disgust. Further, this effect was more pronounced in participants with politically conservative views than in participants with politically liberal views. Taken together, these differential findings suggest that embodied gustatory experiences may affect moral processing more than previously thought.

Thursday, March 31, 2011

MindBlog is on the road.

This is just a note that my seasonal migration from Fort Lauderdale, FL, back to the Univ. of Wisconsin in Madison, WI, starts today as I pack my two Abyssinian cats into the car and drive first to Austin, TX, to visit my son and his wife, who live in the family house I grew up in. This Sunday I will be giving a recital of fantasies for the piano, using the Steinway B in a hall at Westminster Manor, where my parents spent their final years. Then, a week or so later, the cats and I continue the trip to Madison. (Any blog readers who are in the Austin area and might wish to hear the music are welcome to email me. Program: Haydn - Fantasia in C major; Mozart - Fantasia in C minor; Chopin - Fantasy in F minor; Liszt - Années de pèlerinage - Vallée d'Obermann; Debussy - Estampes III, Jardins sous la pluie.)

Blocking arthritis pain in the brain.

Understanding how inflammation works is becoming increasingly urgent to aging persons (like myself) who view with alarm the increasing reactivity of their innate immune system, which can cause arthritic flare-ups or autoimmune and autoinflammatory diseases such as chronic rheumatoid arthritis. Diamond and Tracey offer a brief review of interesting work by Hess et al. showing that patients with rheumatoid arthritis who receive inhibitors of TNF, a major inflammatory cytokine, develop significant changes in brain activity before resolution of inflammation in the affected joints. From their review:
During the first century, the Roman physician Cornelius Celsus defined four cardinal signs of inflammation: redness, swelling, heat, and pain. These signs and symptoms occur during infection by invasive pathogens or as a consequence of trauma. Today, we understand the molecular basis of these physiological responses as mediated by cytokines and other factors produced by cells of the innate immune system. Cytokines are both necessary and sufficient to cause pathophysiological alterations manifested as the four cardinal signs. Importantly, this knowledge has enabled the development of highly selective therapeutical agents that target individual cytokines to prevent or reverse inflammation. For example, selective inhibitors of TNF, a major inflammatory cytokine, have revolutionized the therapy of rheumatoid arthritis, inflammatory bowel disease, and other autoimmune and autoinflammatory diseases affecting millions worldwide. Now, in PNAS, Hess et al. use functional MRI to monitor brain activity and report that patients with rheumatoid arthritis who receive anti-TNF develop significant changes in brain activity before resolution of inflammation in the affected joints.

To accomplish this, the authors measured blood oxygen level-dependent (BOLD) signals in the brain after compressing the metacarpal phalangeal joints of the arthritic hand. They observe enhanced activity in the brain regions associated with pain perception, including the thalamus, somatosensory cortex, and limbic system, regions known to process body sensations and emotions associated with the pain experience (1). Brain activity was significantly reduced within 24 h after treatment with TNF inhibitors, a time frame that preceded any observable evidence of reduced signs of inflammation in affected joints. Clinical composite scores, comprising measurements of C-reactive protein, a circulating marker of inflammation severity, were not improved until after 24 h. This suggests that selective inhibition of TNF has a primary early effect on the nervous system pain centers.

Wednesday, March 30, 2011

Sleep deprivation biases economic risk-taking.

Venkatraman et al. make these fascinating observations:
A single night of sleep deprivation (SD) evoked a strategy shift during risky decision making such that healthy human volunteers moved from defending against losses to seeking increased gains. This change in economic preferences was correlated with the magnitude of an SD-driven increase in ventromedial prefrontal activation as well as by an SD-driven decrease in anterior insula activation during decision making. Analogous changes were observed during receipt of reward outcomes: elevated activation to gains in ventromedial prefrontal cortex and ventral striatum, but attenuated anterior insula activation following losses. Finally, the observed shift in economic preferences was not correlated with change in psychomotor vigilance. These results suggest that a night of total sleep deprivation affects the neural mechanisms underlying economic preferences independent of its effects on vigilant attention.

Tuesday, March 29, 2011

A revolution in evolution - cooperation

Milinski reviews Martin Nowak's new book "SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed," written with journalist Roger Highfield. (This post is another instance of my passing on information about a book that I would very much like to read, but will never find time for...)
...evolutionary theorist Martin Nowak sees cooperation as the master architect of evolution. He believes that next to mutation and selection, cooperation is the driving force at every level, from the primordial soup to cells, organisms, societies and even galaxies.

Game theory is central to Nowak's work, and the book highlights five ways to work together for mutual benefit: direct reciprocity, indirect reciprocity, spatial games, group or multilevel selection, and kin selection. Direct reciprocity is the tit-for-tat exchange of resources...Nowak believes that indirect reciprocity, where I help you and someone else helps me, is the most important mechanism driving human sociality. It enforces the power of reputation, gained by helping or refusing help, which is spread through gossip, thus selecting in evolutionary terms for sophisticated language...Cooperators can prevail through exchanges that are played out across and between networks and clusters of individuals...Multilevel or group selection follows among communities that are small, numerous, and isolated.
Nowak shows where the experts disagree, and questions the theoretical basis of kin selection, or inclusive fitness theory. He offers:
...a new model for the evolution of sociality, in which relatedness...is a consequence rather than the cause of social behaviour. By assuming only one mutation — one that causes offspring to stay in the nest rather than leave — he claims to explain why progeny happen to be around to help their related mother.
A massive amount of data supporting the idea of kin selection has accumulated, however, and Nowak actually says “kin selection is a valid mechanism if properly formulated.”

Monday, March 28, 2011

Experimental Philosophy addresses Free Will vs. Determinism

Shaun Nichols has written an interesting essay on the problem of free will, and Tierney offers a summary. In all cultures, people tend to reject the notion that they live in a deterministic world without free will. From Tierney's review:
regardless of whether free will exists, our society depends on everyone’s believing it does...it's adaptive for societies and individuals to hold a belief in free will, as it helps people adhere to cultural codes of conduct that portend healthy, wealthy and happy life outcomes...The benefits of this belief have been demonstrated in research showing that when people doubt free will, they do worse at their jobs and are less honest.
The article and review note an interesting experiment in which people are asked to judge the moral responsibility of Mark, who cheats a bit on his taxes, and Bill, who falls in love with his secretary and murders his wife and kids to be with her. Most people cut Mark some slack but believe Bill fully responsible for his crime. The inconsistency makes sense if threat to social order is being factored into judging moral responsibility. Again, from Tierney:
At an abstract level, people seem to be what philosophers call incompatibilists: those who believe free will is incompatible with determinism. If everything that happens is determined by what happened before, it can seem only logical to conclude you can’t be morally responsible for your next action.

But there is also a school of philosophers — in fact, perhaps the majority school — who consider free will compatible with their definition of determinism. These compatibilists believe that we do make choices, even though these choices are determined by previous events and influences. In the words of Arthur Schopenhauer, “Man can do what he wills, but he cannot will what he wills.”

Does that sound confusing — or ridiculously illogical? Compatibilism isn’t easy to explain. But it seems to jibe with our gut instinct that Bill is morally responsible even though he’s living in a deterministic universe. Dr. Nichols suggests that his experiment with Mark and Bill shows that in our abstract brains we’re incompatibilists, but in our hearts we’re compatibilists.

“This would help explain the persistence of the philosophical dispute over free will and moral responsibility,” Dr. Nichols writes in Science. “Part of the reason that the problem of free will is so resilient is that each philosophical position has a set of psychological mechanisms rooting for it.”

Friday, March 25, 2011

Us vs. Them - what shapes the urge to harm rivals?

Cikara et al. make some interesting observations on intergroup competition, using avid fans of sports teams as their experimental subjects:
Intergroup competition makes social identity salient, which in turn affects how people respond to competitors’ hardships. The failures of an in-group member are painful, whereas those of a rival out-group member may give pleasure—a feeling that may motivate harming rivals. The present study examined whether valuation-related neural responses to rival groups’ failures correlate with likelihood of harming individuals associated with those rivals. Avid fans of the Red Sox and Yankees teams viewed baseball plays while undergoing functional magnetic resonance imaging. Subjectively negative outcomes (failure of the favored team or success of the rival team) activated anterior cingulate cortex and insula, whereas positive outcomes (success of the favored team or failure of the rival team, even against a third team) activated ventral striatum. The ventral striatum effect, associated with subjective pleasure, also correlated with self-reported likelihood of aggressing against a fan of the rival team (controlling for general aggression). Outcomes of social group competition can directly affect primary reward-processing neural systems, which has implications for intergroup harm.

Thursday, March 24, 2011

The symphony of trading.

Uzzi and colleagues have made an interesting observation. You might think that a gaggle of financial traders on a large exchange floor, who make on average about 80 trades a day, would collectively generate orders with no particular time structure. A 7-hour working day is roughly 25,000 seconds, so the chance of one employee's 80 trades randomly synchronizing with any of his colleagues' is small (a back-of-envelope check follows the abstract below). Uzzi's group, to the contrary, found that up to 60% of all employees were trading in sync at any one second. What's more, individual employees tended to make more money during these harmonious bursts. Here is their abstract:
Successful animal systems often manage risk through synchronous behavior that spontaneously arises without leadership. In critical human systems facing risk, such as financial markets or military operations, our understanding of the benefits associated with synchronicity is nascent but promising. Building on previous work illuminating commonalities between ecological and human systems, we compare the activity patterns of individual financial traders with the simultaneous activity of other traders—an individual and spontaneous characteristic we call synchronous trading. Additionally, we examine the association of synchronous trading with individual performance and communication patterns. Analyzing empirical data on day traders’ second-to-second trading and instant messaging, we find that the higher the traders’ synchronous trading is, the less likely they are to lose money at the end of the day. We also find that the daily instant messaging patterns of traders are closely associated with their level of synchronous trading. This result suggests that synchronicity and vanguard technology may help traders cope with risky decisions in complex systems and may furnish unique prospects for achieving collective and individual goals.
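To get a feel for why the 60% figure is surprising, here is a minimal back-of-envelope sketch in Python. It uses only the round numbers quoted in my paragraph above (80 trades, a ~7-hour day), not the paper's actual data:

```python
# Rough check of how often a single trader would be active in any given
# second if 80 trades were scattered at random through a 7-hour day.
seconds_per_day = 7 * 3600        # 25,200 seconds, "roughly 25,000"
trades_per_trader = 80

# Probability that a given trader happens to place a trade in any
# particular second, if trades occur at random times.
p_active = trades_per_trader / seconds_per_day
print(f"P(a given trader trades in a given second) ~ {p_active:.4f}")  # ~0.003

# Under independence, only about 0.3% of the traders on the floor would be
# expected to be trading in any one second, far below the observed bursts
# in which up to 60% of employees traded simultaneously.
```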

Wednesday, March 23, 2011

A new view of early human social evolution

I remember firmly taking away from Jared Diamond's first major book ("The Third Chimpanzee: The Evolution and Future of the Human Animal") the message that early human tribes were bound by kinship (kin selection) as the main motive for cooperation within the group, and that human tribes (like chimpanzee tribes) were antagonistic, so that the most likely outcome of a meeting between males of two different tribes would be a battle. Not, it now turns out, if that male is your brother or cousin. Because humans lived as foragers for 95% of our species' history, Hill et al. analyzed co-residence patterns among 32 present-day foraging societies. They found that the members of a band are not highly related. Both young males and young females disperse to other groups (in chimps, only females disperse). And the emergence of pair bonding between males and females apparently allowed people to recognize their relatives, something chimps can do only to a limited extent. When family members disperse to other bands, they are recognized, and neighboring bands are more likely to cooperate instead of fighting to the death as chimp groups do. The new view would be that cooperative behavior, as distinct from the fierce aggression between chimp groups, was the turning point that shaped human evolution.
Here is the Hill et al. abstract:
Contemporary humans exhibit spectacular biological success derived from cumulative culture and cooperation. The origins of these traits may be related to our ancestral group structure. Because humans lived as foragers for 95% of our species’ history, we analyzed co-residence patterns among 32 present-day foraging societies (total n = 5067 individuals, mean experienced band size = 28.2 adults). We found that hunter-gatherers display a unique social structure where (i) either sex may disperse or remain in their natal group, (ii) adult brothers and sisters often co-reside, and (iii) most individuals in residential groups are genetically unrelated. These patterns produce large interaction networks of unrelated adults and suggest that inclusive fitness cannot explain extensive cooperation in hunter-gatherer bands. However, large social networks may help to explain why humans evolved capacities for social learning that resulted in cumulative culture.
A brief review of this work by Chapais asks the question:
...what “cognitive prerequisites” were necessary for social groups to act as individual units and coordinate their actions in relation to other units? Did hominins, for example, require a theory of mind (the attribution of mental states to others) and shared intentionality (the recognition that I and others act as a collective working toward the same goal) (10) to achieve that level of cooperation?

Tuesday, March 22, 2011

How to grow a human mind...

Tenenbaum et al. offer an utterly fascinating review of attempts to understand cognitive development by reverse engineering. They give a simple description of Bayesian or probabilistic approaches that even I can (finally) begin to understand. They state the problem:
For scientists studying how humans come to understand their world, the central challenge is this: How do our minds get so much from so little? We build rich causal models, make strong generalizations, and construct powerful abstractions, whereas the input data are sparse, noisy, and ambiguous—in every way far too limited. A massive mismatch looms between the information coming in through our senses and the outputs of cognition.
Here are several clips from the article (I can send a PDF of the whole article to interested readers). They start with an illustration (click to enlarge):


Figure Legend:
Human children learning names for object concepts routinely make strong generalizations from just a few examples. The same processes of rapid generalization can be studied in adults learning names for novel objects created with computer graphics. (A) Given these alien objects and three examples (boxed in red) of “tufas” (a word in the alien language), which other objects are tufas? Almost everyone selects just the objects boxed in gray. (B) Learning names for categories can be modeled as Bayesian inference over a tree-structured domain representation. Objects are placed at the leaves of the tree, and hypotheses about categories that words could label correspond to different branches. Branches at different depths pick out hypotheses at different levels of generality (e.g., Clydesdales, draft horses, horses, animals, or living things). Priors are defined on the basis of branch length, reflecting the distinctiveness of categories. Likelihoods assume that examples are drawn randomly from the branch that the word labels, favoring lower branches that cover the examples tightly; this captures the sense of suspicious coincidence when all examples of a word cluster in the same part of the tree. Combining priors and likelihoods yields posterior probabilities that favor generalizing across the lowest distinctive branch that spans all the observed examples (boxed in gray).

“Bayesian” or “probabilistic” are merely placeholders for a set of interrelated principles and theoretical claims. The key ideas can be thought of as proposals for how to answer three central questions:
1) How does abstract knowledge guide learning and inference from sparse data?
2) What forms does abstract knowledge take, across different domains and tasks?
3) How is abstract knowledge itself acquired?

At heart, Bayes’s rule is simply a tool for answering question 1: How does abstract knowledge guide inference from incomplete data? Abstract knowledge is encoded in a probabilistic generative model, a kind of mental model that describes the causal processes in the world giving rise to the learner’s observations as well as unobserved or latent variables that support effective prediction and action if the learner can infer their hidden state. Generative models must be probabilistic to handle the learner’s uncertainty about the true states of latent variables and the true causal processes at work. A generative model is abstract in two senses: It describes not only the specific situation at hand, but also a broader class of situations over which learning should generalize, and it captures in parsimonious form the essential world structure that causes learners’ observations and makes generalization possible.

Bayesian inference gives a rational framework for updating beliefs about latent variables in generative models given observed data. Background knowledge is encoded through a constrained space of hypotheses H about possible values for the latent variables, candidate world structures that could explain the observed data. Finer-grained knowledge comes in the “prior probability” P(h), the learner’s degree of belief in a specific hypothesis h prior to (or independent of) the observations. Bayes’s rule updates priors to “posterior probabilities” P(h|d) conditional on the observed data d:

P(h|d) = P(d|h) P(h) / Σ_{h′ ∈ H} P(d|h′) P(h′)

The posterior probability is proportional to the product of the prior probability and the likelihood P(d|h), measuring how expected the data are under hypothesis h, relative to all other hypotheses h′ in H.

To illustrate Bayes’s rule in action, suppose we observe John coughing (d), and we consider three hypotheses as explanations: John has h1, a cold; h2, lung disease; or h3, heartburn. Intuitively only h1 seems compelling. Bayes’s rule explains why. The likelihood favors h1 and h2 over h3: only colds and lung disease cause coughing and thus elevate the probability of the data above baseline. The prior, in contrast, favors h1 and h3 over h2: Colds and heartburn are much more common than lung disease. Bayes’s rule weighs hypotheses according to the product of priors and likelihoods and so yields only explanations like h1 that score highly on both terms.
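To make the arithmetic concrete, here is a minimal Python sketch of the coughing example. The priors and likelihoods are invented numbers chosen only to illustrate the qualitative point (colds and heartburn are common, lung disease is rare; only colds and lung disease make coughing likely), not values from the article:

```python
# A minimal sketch of the coughing example, with invented priors and
# likelihoods chosen only to illustrate the qualitative argument.
hypotheses = {
    # name: (prior P(h), likelihood P(coughing | h))
    "h1: cold":         (0.20,  0.50),   # common, and causes coughing
    "h2: lung disease": (0.001, 0.50),   # causes coughing, but rare
    "h3: heartburn":    (0.20,  0.01),   # common, but rarely causes coughing
}

# Bayes's rule: the posterior is proportional to prior * likelihood,
# normalized over all candidate hypotheses in H.
unnormalized = {h: prior * like for h, (prior, like) in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: P(h|d) = {p:.3f}")
# Only the cold scores highly on both prior and likelihood, so it
# dominates the posterior.
```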

The same principles can explain how people learn from sparse data. In concept learning, the data might correspond to several example objects (Fig. 1) and the hypotheses to possible extensions of the concept. Why, given three examples of different kinds of horses, would a child generalize the word “horse” to all and only horses (h1)? Why not h2, “all horses except Clydesdales”; h3, “all animals”; or any other rule consistent with the data? Likelihoods favor the more specific patterns, h1 and h2; it would be a highly suspicious coincidence to draw three random examples that all fall within the smaller sets h1 or h2 if they were actually drawn from the much larger h3. The prior favors h1 and h3, because as more coherent and distinctive categories, they are more likely to be the referents of common words in language. Only h1 scores highly on both terms. Likewise, in causal learning, the data could be co-occurrences between events; the hypotheses, possible causal relations linking the events. Likelihoods favor causal links that make the co-occurrence more probable, whereas priors favor links that fit with our background knowledge of what kinds of events are likely to cause which others; for example, a disease (e.g., cold) is more likely to cause a symptom (e.g., coughing) than the other way around.
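The “suspicious coincidence” logic behind the horse example can also be sketched in a few lines of Python. The hypothesis extensions, their sizes, and the priors below are all invented for illustration; the key assumption is the size principle, in which examples are treated as random draws from a hypothesis's extension, so P(d|h) = (1/|h|)^n for n examples:

```python
# A minimal sketch of the "size principle" in the horse example.
# Extension sizes and priors are invented for illustration only.
n_examples = 3  # three example horses observed

hypotheses = {
    # name: (extension size |h|, prior P(h))
    "h1: all horses":                (100,   0.30),  # coherent category
    "h2: horses except Clydesdales": (95,    0.05),  # ad hoc set, low prior
    "h3: all animals":               (10000, 0.30),  # coherent but very broad
}

def likelihood(size, n):
    # Assume examples are drawn uniformly at random from the hypothesis's
    # extension: P(d|h) = (1/|h|)^n.
    return (1.0 / size) ** n

unnormalized = {h: prior * likelihood(size, n_examples)
                for h, (size, prior) in hypotheses.items()}
total = sum(unnormalized.values())
for h, p in unnormalized.items():
    print(f"{h}: P(h|d) = {p / total:.4f}")
# h2 loses on the prior, h3 loses on the likelihood (a "suspicious
# coincidence" that three random animals were all horses); only h1
# scores highly on both terms.
```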
The authors continue by offering examples of hierarchical Bayesian models with different graph structures, and then argue that the Bayesian approach brings us closer to understanding cognition than older connectionist or neural network models do.
...the Bayesian approach lets us move beyond classic either-or dichotomies that have long shaped and limited debates in cognitive science: “empiricism versus nativism,” “domain-general versus domain-specific,” “logic versus probability,” “symbols versus statistics.” Instead we can ask harder questions of reverse-engineering, with answers potentially rich enough to help us build more humanlike AI systems.

Monday, March 21, 2011

A new blog - David Brooks' psychology blog.

I just got around to having a look at David Brooks' relatively new psychology blog, which has some good stuff. (I spend virtually no time looking at other blogs, not having enough time to do even this one as well as I would like. Also, blogs tend to recycle each other's content, as if taking in each other's laundry.)

PLEASE tell me that Brooks has a research staff looking up this stuff for him. If he is actually doing this himself, in addition to writing two NYTimes Op-Ed pieces a week, giving frequent lectures and TV appearances, carrying on several online dialogues, and promoting his new book, he is bloody superhuman....

Chaos and complexity in financial markets and nuclear meltdowns

I trust that my friend and colleague John Young will not mind my passing on his email to the Chaos and Complexity Seminar group to which we both belong at the Univ. of Wisconsin:
The NYT article "Derivatives, as Accused by Buffett" (here is a PDF of Buffett's testimony before the Financial Crisis Inquiry Commission) has words that translate into our lingo of Chaos (produced by strong signals interacting nonlinearly) and Complexity (many coupled degrees of freedom). Excerpts:

Complexity
The problems arise, Mr. Buffett said, when a bank’s exposure to derivatives balloons to grand proportions and uninformed investors start using them. It “doesn’t make much difference if it’s, you know, one guy rolling dice against another, and they’re doing $5 a throw.

...and Nonlinearity
But it makes a lot of difference when you get into big numbers.” What worries him most is the big financial institutions that have millions of contracts. “If I look at JPMorgan, I see two trillion in receivables, two trillion in payables, a trillion and seven netted off on each side and $300 billion remaining, maybe $200 billion collateralized,” he said, walking through his thinking.

...and High Amplitude Chaos
“That’s all fine. But I don’t know what discontinuities are going to do to those numbers overnight if there’s a major nuclear, chemical or biological terrorist action that really is disruptive to the whole financial system.”

And Floyd Norris offers an interesting article in last Friday's NYTimes that expands on this last point, noting the general acceptance of the idea that regulators had developed sophisticated risk models to prevent disaster in both the financial and nuclear power industries. Both were wrong.