This is just a note that my seasonal migration from Fort Lauderdale, FL, back to the Univ. of Wisconsin in Madison, WI, starts today as I pack my two Abyssinian cats in the car and drive first to Austin, TX, to visit my son and his wife, who live in the family house I grew up in. This Sunday I will be giving a recital of fantasies for the piano, using the Steinway B at a hall at Westminster Manor, where my parents spent their final years. Then a week or so later, the cats and I continue the trip to Madison. (Any blog readers who are in the Austin area and might wish to hear the music are welcome to email me. Program: Haydn - Fantasia in C major; Mozart - Fantasia in C minor; Chopin - Fantasy in F minor; Liszt - Années de pèlerinage - Vallée d'Obermann; Debussy - Estampes III. Jardins sous la pluie).
Thursday, March 31, 2011
Understanding how inflammation works is becoming increasingly urgent to aging persons (like myself) who view with alarm the increasing reactivity of their innate immune system, which can cause arthritic flare-ups, or autoimmune and autoinflammatory diseases such as chronic rheumatoid arthritis. Diamond and Tracey do a brief review of interesting work by Hess et al. showing that patients with rheumatoid arthritis who receive inhibitors of TNF, a major inflammatory cytokine, develop significant changes in brain activity before resolution of inflammation in the affected joints. From their review:
During the first century, the Roman physician Cornelius Celsus defined four cardinal signs of inflammation: redness, swelling, heat, and pain. These signs and symptoms occur during infection by invasive pathogens or as a consequence of trauma. Today, we understand the molecular basis of these physiological responses as mediated by cytokines and other factors produced by cells of the innate immune system. Cytokines are both necessary and sufficient to cause pathophysiological alterations manifested as the four cardinal signs. Importantly, this knowledge has enabled the development of highly selective therapeutical agents that target individual cytokines to prevent or reverse inflammation. For example, selective inhibitors of TNF, a major inflammatory cytokine, have revolutionized the therapy of rheumatoid arthritis, inflammatory bowel disease, and other autoimmune and autoinflammatory diseases affecting millions worldwide. Now, in PNAS, Hess et al. use functional MRI to monitor brain activity and report that patients with rheumatoid arthritis who receive anti-TNF develop significant changes in brain activity before resolution of inflammation in the affected joints.
To accomplish this, the authors measured blood oxygen level-dependent (BOLD) signals in the brain after compressing the metacarpal phalangeal joints of the arthritic hand. They observed enhanced activity in the brain regions associated with pain perception, including the thalamus, somatosensory cortex, and limbic system, regions known to process body sensations and emotions associated with the pain experience (1). Brain activity was significantly reduced within 24 h after treatment with TNF inhibitors, a time frame that preceded any observable evidence of reduced signs of inflammation in affected joints. Clinical composite scores, comprising measurements of C-reactive protein, a circulating marker of inflammation severity, were not improved until after 24 h. This suggests that selective inhibition of TNF has a primary early effect on the nervous system pain centers.
Wednesday, March 30, 2011
Venkatraman et al. make these fascinating observations:
A single night of sleep deprivation (SD) evoked a strategy shift during risky decision making such that healthy human volunteers moved from defending against losses to seeking increased gains. This change in economic preferences was correlated with the magnitude of an SD-driven increase in ventromedial prefrontal activation as well as by an SD-driven decrease in anterior insula activation during decision making. Analogous changes were observed during receipt of reward outcomes: elevated activation to gains in ventromedial prefrontal cortex and ventral striatum, but attenuated anterior insula activation following losses. Finally, the observed shift in economic preferences was not correlated with change in psychomotor vigilance. These results suggest that a night of total sleep deprivation affects the neural mechanisms underlying economic preferences independent of its effects on vigilant attention.
Tuesday, March 29, 2011
Milinski reviews Martin Nowak's new book "SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed," written with journalist Roger Highfield. (This post is another instance of my passing on some information on a book that I would like very much to read, but never will find time for...).
...evolutionary theorist Martin Nowak sees cooperation as the master architect of evolution. He believes that next to mutation and selection, cooperation is the driving force at every level, from the primordial soup to cells, organisms, societies and even galaxies.
Nowak shows where the experts disagree, and questions the theoretical basis of kin selection, or inclusive fitness theory. He offers:
Game theory is central to Nowak's work and the book highlights five ways to work together for mutual benefit: direct reciprocity, indirect reciprocity, spatial games, group or multilevel selection and kin selection. Direct reciprocity is the tit-for-tat exchange of resources...Nowak believes that indirect reciprocity, where I help you and someone else helps me, is the most important mechanism driving human sociality. It enforces the power of reputation, gained by helping or refusing help, which is spread through gossip, thus selecting in evolutionary terms for sophisticated language...Cooperators can prevail through exchanges that are played out across and between networks and clusters of individuals...Multilevel or group selection follows among communities that are small, numerous and isolated.
...a new model for the evolution of sociality, in which relatedness...is a consequence rather than the cause of social behaviour. By assuming only one mutation — one that causes offspring to stay in the nest rather than leave — he claims to explain why progeny happen to be around to help their related mother.
A massive amount of data supporting the idea of kin selection has accumulated, however, and Nowak actually says “kin selection is a valid mechanism if properly formulated.”
Monday, March 28, 2011
Shaun Nichols has done an interesting essay on the problem of free will, and Tierney offers a summary. In all cultures people tend to reject the notion that they live in a deterministic world without free will. From Tierney's review:
regardless of whether free will exists, our society depends on everyone’s believing it does...it's adaptive for societies and individuals to hold a belief in free will, as it helps people adhere to cultural codes of conduct that portend healthy, wealthy and happy life outcomes...The benefits of this belief have been demonstrated in research showing that when people doubt free will, they do worse at their jobs and are less honest.
The article and review note an interesting experiment in which people are asked to judge the moral responsibility of Mark, who cheats a bit on his taxes, and Bill, who falls in love with his secretary and murders his wife and kids to be with her. Most people cut Mark some slack but believe Bill fully responsible for his crime. The inconsistency makes sense if threat to social order is being factored into judging moral responsibility. Again, from Tierney:
At an abstract level, people seem to be what philosophers call incompatibilists: those who believe free will is incompatible with determinism. If everything that happens is determined by what happened before, it can seem only logical to conclude you can’t be morally responsible for your next action.
But there is also a school of philosophers — in fact, perhaps the majority school — who consider free will compatible with their definition of determinism. These compatibilists believe that we do make choices, even though these choices are determined by previous events and influences. In the words of Arthur Schopenhauer, “Man can do what he wills, but he cannot will what he wills.”
Does that sound confusing — or ridiculously illogical? Compatibilism isn’t easy to explain. But it seems to jibe with our gut instinct that Bill is morally responsible even though he’s living in a deterministic universe. Dr. Nichols suggests that his experiment with Mark and Bill shows that in our abstract brains we’re incompatibilists, but in our hearts we’re compatibilists.
“This would help explain the persistence of the philosophical dispute over free will and moral responsibility,” Dr. Nichols writes in Science. “Part of the reason that the problem of free will is so resilient is that each philosophical position has a set of psychological mechanisms rooting for it.”
Friday, March 25, 2011
Cikara et al. make some interesting observations on intergroup competition, using avid fans of sports teams as their experimental subjects:
Intergroup competition makes social identity salient, which in turn affects how people respond to competitors’ hardships. The failures of an in-group member are painful, whereas those of a rival out-group member may give pleasure—a feeling that may motivate harming rivals. The present study examined whether valuation-related neural responses to rival groups’ failures correlate with likelihood of harming individuals associated with those rivals. Avid fans of the Red Sox and Yankees teams viewed baseball plays while undergoing functional magnetic resonance imaging. Subjectively negative outcomes (failure of the favored team or success of the rival team) activated anterior cingulate cortex and insula, whereas positive outcomes (success of the favored team or failure of the rival team, even against a third team) activated ventral striatum. The ventral striatum effect, associated with subjective pleasure, also correlated with self-reported likelihood of aggressing against a fan of the rival team (controlling for general aggression). Outcomes of social group competition can directly affect primary reward-processing neural systems, which has implications for intergroup harm.
Thursday, March 24, 2011
Uzzi and colleagues have made an interesting observation. You might think that a gaggle of financial traders on a large exchange floor, who make on average about 80 trades a day, would collectively generate orders with no particular time structure. A 7-hour working day is roughly 25,000 seconds, so the chance of one employee's 80 trades randomly synchronizing with those of any of his colleagues is small. Uzzi's group, to the contrary, found that up to 60% of all employees were trading in sync at any one second. What's more, individual employees tended to make more money during these harmonious bursts. Here is their abstract:
Successful animal systems often manage risk through synchronous behavior that spontaneously arises without leadership. In critical human systems facing risk, such as financial markets or military operations, our understanding of the benefits associated with synchronicity is nascent but promising. Building on previous work illuminating commonalities between ecological and human systems, we compare the activity patterns of individual financial traders with the simultaneous activity of other traders—an individual and spontaneous characteristic we call synchronous trading. Additionally, we examine the association of synchronous trading with individual performance and communication patterns. Analyzing empirical data on day traders’ second-to-second trading and instant messaging, we find that the higher the traders’ synchronous trading is, the less likely they are to lose money at the end of the day. We also find that the daily instant messaging patterns of traders are closely associated with their level of synchronous trading. This result suggests that synchronicity and vanguard technology may help traders cope with risky decisions in complex systems and may furnish unique prospects for achieving collective and individual goals.
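The back-of-the-envelope chance argument above can be checked with a quick Monte Carlo sketch. (The trader count and the assumption of independent, uniformly random trade times are my own illustrative choices, not from the paper.)

```python
import random

random.seed(0)

DAY_SECONDS = 7 * 60 * 60   # a 7-hour trading day, about 25,200 seconds
N_TRADERS = 100             # hypothetical firm size (illustrative)
TRADES_EACH = 80            # average trades per trader per day

# Each trader places trades at independent, uniformly random seconds.
counts = [0] * DAY_SECONDS
for _ in range(N_TRADERS):
    for sec in random.sample(range(DAY_SECONDS), TRADES_EACH):
        counts[sec] += 1

mean_sync = sum(counts) / (DAY_SECONDS * N_TRADERS)  # average fraction trading per second
peak_sync = max(counts) / N_TRADERS                  # largest fraction in any one second

print(f"mean fraction trading per second: {mean_sync:.4f}")
print(f"peak fraction trading in any one second: {peak_sync:.2f}")
```

With purely random timing, the average overlap is about 0.3% and even the single busiest second of the day stays in the low single digits of percent — nowhere near the 60% synchrony the authors observed, which is the point: the measured synchrony is far above chance.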
Wednesday, March 23, 2011
I remember firmly taking away the message from Jared Diamond's first major book ("The Third Chimpanzee: The Evolution and Future of the Human Animal") that early human tribes were bound by kinship (kin selection) as the main motive for cooperation within the group, and that human tribes (like chimpanzee tribes) were antagonistic, so that the most likely outcome of a meeting between males of two different tribes would be a battle. Not, it now turns out, if that male is your brother or cousin. Because humans lived as foragers for 95% of our species’ history, Hill et al. analyzed co-residence patterns among 32 present-day foraging societies. They found that the members of a band are not highly related. Both young males and young females disperse to other groups (in chimps, only females disperse). And the emergence of pair bonding between males and females apparently has allowed people to recognize their relatives, something chimps can do only to a limited extent. When family members disperse to other bands, they are recognized, and neighboring bands are more likely to cooperate instead of fighting to the death as chimp groups do. The new view would be that cooperative behavior, as distinct from the fierce aggression between chimp groups, was the turning point that shaped human evolution.
Here is the Hill et al. abstract:
Contemporary humans exhibit spectacular biological success derived from cumulative culture and cooperation. The origins of these traits may be related to our ancestral group structure. Because humans lived as foragers for 95% of our species’ history, we analyzed co-residence patterns among 32 present-day foraging societies (total n = 5067 individuals, mean experienced band size = 28.2 adults). We found that hunter-gatherers display a unique social structure where (i) either sex may disperse or remain in their natal group, (ii) adult brothers and sisters often co-reside, and (iii) most individuals in residential groups are genetically unrelated. These patterns produce large interaction networks of unrelated adults and suggest that inclusive fitness cannot explain extensive cooperation in hunter-gatherer bands. However, large social networks may help to explain why humans evolved capacities for social learning that resulted in cumulative culture.
A brief review of this work by Chapais asks the question:
...what “cognitive prerequisites” were necessary for social groups to act as individual units and coordinate their actions in relation to other units? Did hominins, for example, require a theory of mind (the attribution of mental states to others) and shared intentionality (the recognition that I and others act as a collective working toward the same goal) (10) to achieve that level of cooperation?
Tuesday, March 22, 2011
Tenenbaum et al. offer an utterly fascinating review of attempts to understand cognitive development by reverse engineering. They offer a simple description of Bayesian or probabilistic approaches that even I can (finally) begin to understand. They state the problem:
For scientists studying how humans come to understand their world, the central challenge is this: How do our minds get so much from so little? We build rich causal models, make strong generalizations, and construct powerful abstractions, whereas the input data are sparse, noisy, and ambiguous—in every way far too limited. A massive mismatch looms between the information coming in through our senses and the outputs of cognition.
Here are several clips from the article (I can send a PDF of the whole article to interested readers). They start with an illustration (click to enlarge):
Figure Legend: Human children learning names for object concepts routinely make strong generalizations from just a few examples. The same processes of rapid generalization can be studied in adults learning names for novel objects created with computer graphics. (A) Given these alien objects and three examples (boxed in red) of “tufas” (a word in the alien language), which other objects are tufas? Almost everyone selects just the objects boxed in gray. (B) Learning names for categories can be modeled as Bayesian inference over a tree-structured domain representation. Objects are placed at the leaves of the tree, and hypotheses about categories that words could label correspond to different branches. Branches at different depths pick out hypotheses at different levels of generality (e.g., Clydesdales, draft horses, horses, animals, or living things). Priors are defined on the basis of branch length, reflecting the distinctiveness of categories. Likelihoods assume that examples are drawn randomly from the branch that the word labels, favoring lower branches that cover the examples tightly; this captures the sense of suspicious coincidence when all examples of a word cluster in the same part of the tree. Combining priors and likelihoods yields posterior probabilities that favor generalizing across the lowest distinctive branch that spans all the observed examples (boxed in gray).
The authors continue by offering examples of hierarchical Bayesian models with different graphical matrices, and then argue that the Bayesian approach brings us closer to understanding cognition than older connectionist or neural network models.
“Bayesian” or “probabilistic” are merely placeholders for a set of interrelated principles and theoretical claims. The key ideas can be thought of as proposals for how to answer three central questions:
1) How does abstract knowledge guide learning and inference from sparse data?
2) What forms does abstract knowledge take, across different domains and tasks?
3) How is abstract knowledge itself acquired?
At heart, Bayes’s rule is simply a tool for answering question 1: How does abstract knowledge guide inference from incomplete data? Abstract knowledge is encoded in a probabilistic generative model, a kind of mental model that describes the causal processes in the world giving rise to the learner’s observations as well as unobserved or latent variables that support effective prediction and action if the learner can infer their hidden state. Generative models must be probabilistic to handle the learner’s uncertainty about the true states of latent variables and the true causal processes at work. A generative model is abstract in two senses: It describes not only the specific situation at hand, but also a broader class of situations over which learning should generalize, and it captures in parsimonious form the essential world structure that causes learners’ observations and makes generalization possible.
Bayesian inference gives a rational framework for updating beliefs about latent variables in generative models given observed data. Background knowledge is encoded through a constrained space of hypotheses H about possible values for the latent variables, candidate world structures that could explain the observed data. Finer-grained knowledge comes in the “prior probability” P(h), the learner’s degree of belief in a specific hypothesis h prior to (or independent of) the observations. Bayes’s rule updates priors to “posterior probabilities” P(h|d) conditional on the observed data d:
P(h|d) = P(d|h) P(h) / Σh′∈H P(d|h′) P(h′)
The posterior probability is proportional to the product of the prior probability and the likelihood P(d|h), measuring how expected the data are under hypothesis h, relative to all other hypotheses h′ in H.
To illustrate Bayes’s rule in action, suppose we observe John coughing (d), and we consider three hypotheses as explanations: John has h1, a cold; h2, lung disease; or h3, heartburn. Intuitively only h1 seems compelling. Bayes’s rule explains why. The likelihood favors h1 and h2 over h3: only colds and lung disease cause coughing and thus elevate the probability of the data above baseline. The prior, in contrast, favors h1 and h3 over h2: Colds and heartburn are much more common than lung disease. Bayes’s rule weighs hypotheses according to the product of priors and likelihoods and so yields only explanations like h1 that score highly on both terms.
The same principles can explain how people learn from sparse data. In concept learning, the data might correspond to several example objects (Fig. 1) and the hypotheses to possible extensions of the concept. Why, given three examples of different kinds of horses, would a child generalize the word “horse” to all and only horses (h1)? Why not h2, “all horses except Clydesdales”; h3, “all animals”; or any other rule consistent with the data? Likelihoods favor the more specific patterns, h1 and h2; it would be a highly suspicious coincidence to draw three random examples that all fall within the smaller sets h1 or h2 if they were actually drawn from the much larger h3. The prior favors h1 and h3, because as more coherent and distinctive categories, they are more likely to be the referents of common words in language. Only h1 scores highly on both terms. Likewise, in causal learning, the data could be co-occurrences between events; the hypotheses, possible causal relations linking the events. Likelihoods favor causal links that make the co-occurrence more probable, whereas priors favor links that fit with our background knowledge of what kinds of events are likely to cause which others; for example, a disease (e.g., cold) is more likely to cause a symptom (e.g., coughing) than the other way around.
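The John-coughing arithmetic takes only a few lines to run. (The numerical priors and likelihoods below are my own illustrative guesses, not values from the article; the qualitative pattern is what matters.)

```python
# Hypotheses explaining the observation "John is coughing":
#   cold and lung disease make coughing likely; heartburn does not.
#   Colds and heartburn are common; lung disease is rare.
priors = {"cold": 0.50, "lung disease": 0.01, "heartburn": 0.49}
likelihoods = {"cold": 0.80, "lung disease": 0.80, "heartburn": 0.01}  # P(coughing | h)

# Bayes's rule: posterior proportional to prior times likelihood,
# normalized over all hypotheses in H.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({h} | coughing) = {p:.3f}")
```

Only "cold" scores highly on both the prior and the likelihood, so it dominates the posterior, exactly as the authors' intuition test predicts.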
...the Bayesian approach lets us move beyond classic either-or dichotomies that have long shaped and limited debates in cognitive science: “empiricism versus nativism,” “domain-general versus domain-specific,” “logic versus probability,” “symbols versus statistics.” Instead we can ask harder questions of reverse-engineering, with answers potentially rich enough to help us build more humanlike AI systems.
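The size-principle reasoning behind the horse example can be sketched the same way. (The category sizes and prior weights below are my own illustrative numbers; the authors describe the logic, not these particular values.)

```python
# Hypotheses for the word "horse", given three random horse examples:
#   (name, size of the category's extension, prior reflecting how "natural" it is)
hypotheses = [
    ("all horses",                20,  0.40),  # coherent, distinctive category
    ("horses except Clydesdales", 18,  0.05),  # gerrymandered rule -> low prior
    ("all animals",               500, 0.40),  # coherent but very broad
]
n_examples = 3  # all three examples are consistent with every hypothesis

# Size principle: likelihood of drawing each example at random from a
# category is 1/size, so three examples give (1/size)**3 — small
# categories that tightly cover the data are strongly favored.
scores = {name: prior * (1.0 / size) ** n_examples for name, size, prior in hypotheses}
total = sum(scores.values())
posterior = {name: s / total for name, s in scores.items()}
best = max(posterior, key=posterior.get)
print(best, posterior)
```

"All animals" is crushed by the likelihood (a suspicious coincidence that three random animals were all horses), "horses except Clydesdales" by the prior, leaving "all horses" as the clear winner on both terms.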
Monday, March 21, 2011
I just got around to having a look at David Brooks' relatively new psychology blog, which has some good stuff. (I spend virtually no time looking at other blogs, not having enough time to even do this one as well as I would like. Also, blogs tend to get into recycling each other, as if taking in each other's laundry).
PLEASE tell me that Brooks has a research staff looking up this stuff for him. If he is actually doing this himself, in addition to writing two NYTimes Op-Ed pieces a week, giving frequent lectures and TV appearances, carrying on several online dialogues, and promoting his new book, he is a bloody superhuman....
I trust that my friend and colleague John Young will not mind my passing on his email to the Chaos and Complexity Seminar group to which we both belong at the Univ. of Wisconsin:
The NYT article "Derivatives, as Accused by Buffett" (here is a PDF of Buffett's testimony before the Financial Crisis Inquiry Commission) has words that translate into our lingo of Chaos (produced by strong signals interacting nonlinearly) and Complexity (many coupled degrees of freedom). Excerpts:
The problems arise, Mr. Buffett said, when a bank’s exposure to derivatives balloons to grand proportions and uninformed investors start using them. It “doesn’t make much difference if it’s, you know, one guy rolling dice against another, and they’re doing $5 a throw.
But it makes a lot of difference when you get into big numbers.” What worries him most is the big financial institutions that have millions of contracts. “If I look at JPMorgan, I see two trillion in receivables, two trillion in payables, a trillion and seven netted off on each side and $300 billion remaining, maybe $200 billion collateralized,” he said, walking through his thinking.
...and High Amplitude Chaos
“That’s all fine. But I don’t know what discontinuities are going to do to those numbers overnight if there’s a major nuclear, chemical or biological terrorist action that really is disruptive to the whole financial system.”
And Floyd Norris offers an interesting article that expands on this last point in last Friday's NYTimes, noting how there was general acceptance of the idea that regulators had developed sophisticated risk models to prevent a disaster in both the financial and nuclear power industries. Both were wrong.
Friday, March 18, 2011
As a prelude to your possible weekend libations, I thought I would note that in a recent issue of PNAS Francisco Ayala offers a brief history of wine grape cultivation from ancient to modern times, and provides this summary on the health effects of wine consumption.
The top country producers and consumers of wine are Italy, France, and Spain. The United States is fourth in production but largest in total consumption because of its large population. The average consumption per person in the United States, although it is gradually increasing, is about one glass per week compared with nearly one glass per day in the three Mediterranean countries, where it is slowly decreasing. In wine consumption per person, the United States ranks 57th in the world.
In 1819, an Irish physician, Dr. Samuel Black, attributed the much lower prevalence of angina pectoris in France than in Ireland to the “French habits and modes of living”. There is now a wealth of evidence that moderate drinking of wine, particularly red, decreases the risk for mortality. A plot of risk for dying against alcohol consumption yields a J-shaped curve, showing that moderate drinkers outlive both teetotalers and heavy drinkers and that teetotalers outlive heavy drinkers (see figure). The beneficial effects of moderate red wine drinking are often attributed to resveratrol and other polyphenols, antioxidants derived from the skins, seeds, and stems of grapes. Beneficial health effects include, first and foremost, lowered risk for cardiovascular disease but also for some forms of cancer, stroke and other cerebrovascular accidents, type 2 diabetes, macular degeneration, Alzheimer's disease, vascular dementia, kidney stones and gallstones, bone density, hip fracture, and other diseases.
The J-shaped relationship between wine drinking and risk for death. (One glass of red wine contains approximately 10 grams of alcohol.)
Thursday, March 17, 2011
Ceci and Williams offer an interesting perspective in an open access article. They conclude that women's underrepresentation in the sciences is due not to sex discrimination in grant and manuscript reviewing, interviewing, and hiring, but rather:
...primarily to factors surrounding family formation and childrearing, gendered expectations, lifestyle choices, and career preferences—some originating before or during adolescence — and secondarily to sex differences at the extreme right tail of mathematics performance on tests used as gateways to graduate school admission.
Here is their abstract:
Explanations for women's underrepresentation in math-intensive fields of science often focus on sex discrimination in grant and manuscript reviewing, interviewing, and hiring. Claims that women scientists suffer discrimination in these arenas rest on a set of studies undergirding policies and programs aimed at remediation. More recent and robust empiricism, however, fails to support assertions of discrimination in these domains. To better understand women's underrepresentation in math-intensive fields and its causes, we reprise claims of discrimination and their evidentiary bases. Based on a review of the past 20 y of data, we suggest that some of these claims are no longer valid and, if uncritically accepted as current causes of women's lack of progress, can delay or prevent understanding of contemporary determinants of women's underrepresentation. We conclude that differential gendered outcomes in the real world result from differences in resources attributable to choices, whether free or constrained, and that such choices could be influenced and better informed through education if resources were so directed. Thus, the ongoing focus on sex discrimination in reviewing, interviewing, and hiring represents costly, misplaced effort: Society is engaged in the present in solving problems of the past, rather than in addressing meaningful limitations deterring women's participation in science, technology, engineering, and mathematics careers today. Addressing today's causes of underrepresentation requires focusing on education and policy changes that will make institutions responsive to differing biological realities of the sexes. Finally, we suggest potential avenues of intervention to increase gender fairness that accord with current, as opposed to historical, findings.
Wednesday, March 16, 2011
The executive editor of the New York Times, Bill Keller, wrote a fascinating piece in last Sunday's New York Times Magazine. It sort of hit me between the eyes, because I am self-conscious (I feel like I'm being lazy, in fact, even though this blog takes a bloody lot of time to do) about the fact that this MindBlog is mainly an aggregator, rather than a reporter of my own original ideas. (Emails from readers grateful at having been made aware of this or that bit of information temper my self-flagellation just a bit.) Keller notes how:
...our fascination with capital-M Media is so disengaged from what really matters.
Then he goes after the Huffington Post (which I glance at daily):
...Much as the creative minds of Wall Street found a way to divorce investing from the messiness of tangible assets, enabling clients to buy shadows of shadows, we in Media have transcended earthbound activities like reporting, writing or picture-taking and created an abstraction — a derivative — called Media in which we invest our attention and esteem. Possibly I am old-fashioned, but in these days when actual journalists are laboring at actual history, covering the fever of democracy in Arab capitals and the fever of austerity in American capitals, the obsession with the theoretical and self-referential feels to me increasingly bloodless...We have flocks of media oxpeckers who ride the backs of pachyderms, feeding on ticks. We have a coterie of learned analysts...who meditate on the meta of media. By turning news executives into celebrities, we devalue the institutions that support them, the basics of craft and the authority of editorial judgment.
“Aggregation” can mean smart people sharing their reading lists, plugging one another into the bounty of the information universe. It kind of describes what I do as an editor. But too often it amounts to taking words written by other people, packaging them on your own Web site and harvesting revenue that might otherwise be directed to the originators of the material. In Somalia this would be called piracy. In the mediasphere, it is a respected business model...The queen of aggregation is, of course, Arianna Huffington, who has discovered that if you take celebrity gossip, adorable kitten videos, posts from unpaid bloggers and news reports from other publications, array them on your Web site and add a left-wing soundtrack, millions of people will come.
...some of the great aggregators, Huffington among them, seem to be experiencing a back-to-the-future epiphany. They seem to have realized that if everybody is an aggregator, nobody will be left to make real stuff to aggregate. Huffington has therefore hired a small stable of experienced journalists, including a few from here, to produce original journalism about business and politics...if serious journalism is about to enjoy a renaissance, I can only rejoice. Gee, maybe we can even get people to pay for it.
Tuesday, March 15, 2011
I'm sure you find yourself annoyed and paralyzed when you face the huge array of options for a single simple item that are offered on drugstore and supermarket shelves. I walked into a Target store here in Fort Lauderdale yesterday wanting to pick up a simple tube of chapstick and spent five minutes trying to figure out which of the 10 or so different varieties might correspond to the single option offered by the 7-11 stores of my youth. Expanded options don't offer significant new freedom; they induce indecision paralysis. Jonah Lehrer points to a working paper by Sela and Burger that suggests that I am making a metacognitive error. I...
...confuse the array of options and excess of information with importance, which then leads my brain to conclude that this decision is worth lots of time and attention. Call it the drugstore heuristic: A cluttered store shelf leads us to automatically assume that a choice must really matter, even if it doesn't. From the Sela and Berger draft, which describes three experiments supporting their points:
Our central premise is that people use subjective experiences of difficulty while making a decision as a cue to how much further time and effort to spend. People generally associate important decisions with difficulty. Consequently, if a decision feels unexpectedly difficult, due to even incidental reasons, people may draw the reverse inference that it is also important, and consequently increase the amount of time and effort they expend. Ironically, this process is particularly likely for decisions that initially seemed unimportant because people expect them to be easier.
If people form inferences about decision importance from their own decision efforts, then not only might increased perceived importance lead people to spend more time deciding, but increased decision time might, in turn, validate and amplify these perceptions of importance, which might further increase deliberation time. Thus, one could imagine a recursive loop between deliberation time, difficulty, and perceived importance. Inferences from difficulty may not only impact immediate deliberation, but may kick off a quicksand cycle that leads people to spend more and more time on a decision that initially seemed rather unimportant. Quicksand sucks people in, but the worse it seems the more people struggle.
Monday, March 14, 2011
Fried et al. have now taken Libet's classic experiment (electrodes on the scalp recording activity in motor cortex in preparation for a movement, before the subject is aware of deciding to act) to a whole new level. They recorded the activity of 1019 neurons while twelve subjects performed self-initiated finger movements. (In some cases of intractable epilepsy, intracranial electrodes are used for evaluation prior to neurosurgery. When depth electrodes are inserted into the cortical tissue itself, it is possible to record the firing patterns of single neurons in awake, behaving humans.) Increases in activity prior to volition were mapped in greater detail, and areas whose firing decreased in preparation for movement were also found, suggesting an inhibitory component to volition. Here is the authors' summary:
* Progressive changes in firing rates precede self-initiated movements
* Medial frontal cortex units signal volition onset before subjects' awareness
* Prediction level is high (90%) based on neuronal responses in single trials
* Volition could arise from accumulation of ensemble activity crossing a threshold
Understanding how self-initiated behavior is encoded by neuronal circuits in the human brain remains elusive. We recorded the activity of 1019 neurons while twelve subjects performed self-initiated finger movement. We report progressive neuronal recruitment over ∼1500 ms before subjects report making the decision to move. We observed progressive increase or decrease in neuronal firing rate, particularly in the supplementary motor area (SMA), as the reported time of decision was approached. A population of 256 SMA neurons is sufficient to predict in single trials the impending decision to move with accuracy greater than 80% already 700 ms prior to subjects' awareness. Furthermore, we predict, with a precision of a few hundred ms, the actual time point of this voluntary decision to move. We implement a computational model whereby volition emerges once a change in internally generated firing rate of neuronal assemblies crosses a threshold.
From an introductory review by Haggard, a segment relevant to Libet's idea that "free will" might correspond to "free won't" - inhibition of an intended action:
A recent model of volition identified the decision of whether to act or not as an important component of volition. Fried et al.’s data suggest one mechanism that might be involved in this decision. Decreasing neurons might withhold actions until they become appropriate through tonic inhibition and then help to trigger voluntary actions by gradually removing this tonic inhibition. Competitive inhibitory interaction between decreasing and increasing neurons could then provide a circuit for resolving whether to act or withhold action. A similar model has already been proposed for decisions between alternative stimulus-driven actions in lateral premotor cortex. Libet thought that ‘‘veto decisions’’ could represent a form of pure mind-brain causation, with consciousness directly intervening to interrupt the buildup of the readiness potential. Competition between populations of medial frontal neurons may provide a simpler explanation, though it still leaves us hunting for potential "decision" areas that may modulate the competition.
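The computational model mentioned at the end of the abstract is essentially a drift-to-threshold (accumulator) scheme. The sketch below is my own minimal caricature, not the authors' implementation; every parameter value and the simple mean-pooling rule are assumptions chosen only to make the idea concrete:

```python
# Toy drift-to-threshold sketch of the Fried et al. idea. Parameter values
# (drift, noise, threshold) are illustrative assumptions, not the paper's model.
import random

random.seed(0)

def time_to_decision(n_units=256, drift=0.02, noise=0.5,
                     threshold=40.0, dt_ms=10, max_ms=3000):
    """Ms until the mean change in ensemble firing rate crosses threshold."""
    rates = [0.0] * n_units  # change in firing rate for each simulated unit
    for t in range(0, max_ms, dt_ms):
        for i in range(n_units):
            # each unit drifts upward with independent noise
            rates[i] += drift * dt_ms + random.gauss(0, noise)
        if sum(rates) / n_units >= threshold:
            return t  # "volition" read out at threshold crossing
    return max_ms

print(time_to_decision())  # crossing time in ms
```

Raising `threshold` or lowering `drift` delays the crossing, which is the sense in which the ~1500 ms of pre-decision recruitment and the threshold crossing are two views of the same buildup.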
Friday, March 11, 2011
Tomasello's group asks whether chimpanzees can determine that another chimpanzee is guiding its actions not on the basis of visual or auditory perception but on the basis of inferences alone. This is a theoretically important question because Povinelli and others have argued that when chimpanzees seemingly understand the visual perception of others, they are only reacting to overt orienting behaviors. The current study was designed so that chimpanzees were faced with an individual who might or might not be making an inference about where food is hidden — with no diagnostic orienting behaviors at all (the chimpanzee subject could not see the other individual making its choice):
If chimpanzees are faced with two opaque boards on a table, in the context of searching for a single piece of food, they do not choose the board lying flat (because if food was under there it would not be lying flat) but, rather, they choose the slanted one— presumably inferring that some unperceived food underneath is causing the slant. Here we demonstrate that chimpanzees know that other chimpanzees in the same situation will make a similar inference. In a back-and-forth foraging game, when their competitor had chosen before them, chimpanzees tended to avoid the slanted board on the assumption that the competitor had already chosen it. Chimpanzees can determine the inferences that a conspecific is likely to make and then adjust their competitive strategies accordingly.
Thursday, March 10, 2011
Continuation of my abstracting of a few of the answers to the annual question at edge.org, "What scientific concept would improve everybody's cognitive toolkit?":
Clay Shirky - The Pareto Principle - "unfairness" is a law.
You see the pattern everywhere: the top 1% of the population control 35% of the wealth. On Twitter, the top 2% of users send 60% of the messages. In the health care system, the treatment of the most expensive fifth of patients creates four-fifths of the overall cost...The Italian economist Vilfredo Pareto undertook a study of market economies a century ago, and discovered that no matter what the country, the richest quintile of the population controlled most of the wealth. The effects of this Pareto Distribution go by many names — the 80/20 Rule, Zipf's Law, the Power Law distribution, Winner-Take-All — but the basic shape of the underlying distribution is always the same: the richest or busiest or most connected participants in a system will account for much, much more wealth, or activity, or connectedness than average...this pattern is recursive. Within the top 20% of a system that exhibits a Pareto distribution, the top 20% of that slice will also account for disproportionately more of whatever is being measured, and so on.
...The Pareto distribution shows up in a remarkably wide array of complex systems. Together, "the" and "of" account for 10% of all words used in English. The most volatile day in the history of a stock market will typically be twice that of the second-most volatile, and ten times the tenth-most. Tag frequency on Flickr photos obeys a Pareto distribution, as does the magnitude of earthquakes, the popularity of books, the size of asteroids, and the social connectedness of your friends.
And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world. We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people.
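Shirky's recursive 80/20 claim is easy to check numerically. The sketch below is my own illustration, not anything from Shirky's essay; the shape parameter α ≈ 1.16 is the value conventionally associated with an 80/20 split. It samples a Pareto distribution, then measures the share held by the top quintile and by the top quintile of that quintile:

```python
# Illustration (assumed parameters): the "recursive" top-slice property
# of a Pareto distribution. alpha ~ 1.16 gives roughly an 80/20 split.
import random

random.seed(42)
alpha = 1.16

samples = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                 reverse=True)

def top_share(values, fraction=0.2):
    """Share of the total held by the top `fraction` of values."""
    k = int(len(values) * fraction)
    return sum(values[:k]) / sum(values)

print(f"top 20% hold {top_share(samples):.0%} of the total")
# The same skew reappears inside the top slice:
top_slice = samples[:len(samples) // 5]
print(f"top 20% of the top 20% hold {top_share(top_slice):.0%} of that slice")
```

With heavy tails the sample share fluctuates from run to run, but both numbers land far above the 20% a Gaussian intuition would predict.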
This doesn't mean that such distributions are beyond our ability to affect them. A Pareto curve's decline from head to tail can be more or less dramatic, and in some cases, political or social intervention can affect that slope — tax policy can raise or lower the share of income of the top 1% of a population, just as there are ways to constrain the overall volatility of markets, or to reduce the band in which health care costs can fluctuate.
However, until we assume such systems are Pareto distributions, and will remain so even after any such intervention, we haven't even started thinking about them in the right way; in all likelihood, we're trying to put a Pareto peg in a Gaussian hole.

Daniel Goleman - Anthropocene thinking
Our planet has left the Holocene Age and entered what geologists call the Anthropocene Age, in which human systems erode the natural systems that support life...of all the global life-support systems, the carbon cycle is closest to no-return. While such "inconvenient truths" about the carbon cycle have been the poster child for our species' slow motion suicide, that's just part of a much larger picture, with all the eight global life-support systems under attack by our daily habits.
We approach the Anthropocene threat with brains shaped in evolution to survive the previous geological epoch, the Holocene, when dangers were signaled by growls and rustles in the bushes, and it served one well to reflexively abhor spiders and snakes. Our neural alarm systems still attune to this largely antiquated range of danger.
Add to that misattunement to threats our built-in perceptual blindspot: we have no direct neural register for the dangers of the Anthropocene age, which are too macro or micro for our sensory apparatus. We are oblivious to, say, our body burden, the lifetime build-up of damaging industrial chemicals in our tissues.
The fields that hold keys to solutions include economics, neuroscience, social psychology and cognitive science — and their various hybrids. With a focus on Anthropocene theory and practice they might well contribute species-saving insights. But first they have to engage this challenge, which for the most part has remained off their agenda.
When, for example, will neuroeconomics tackle the brain's perplexing indifference to the news about planetary meltdown, let alone how that neural blindspot might be patched? Might cognitive neuroscience one day offer some insight that might change our collective decision-making away from a lemmings' march to oblivion? Could any of the computer, behavioral or brain sciences come up with an information prosthetic that might reverse our course?
Wednesday, March 09, 2011
I had to pass this on (my daughter found it).
Nicholas Wade has written a fascinating article on Francis Fukuyama and his new book "The Origins of Political Order" (which I've pre-ordered for my Amazon Kindle; publication date April 12). The book takes off where E.O. Wilson's "Sociobiology" left off, emphasizing cultural traits built around evolved behaviors like favoring relatives, reciprocal altruism, creating and following rules, and a propensity for warfare. Starting with the transition from tribes to states (which occurred in China about 1,000 years earlier than in Europe, ~200 BC versus ~800 AD), he describes the natural selection that occurred as European countries tried different formulas for distributing power, with only England and Denmark (almost by accident) developing the essential institutions of a strong state, the rule of law, and mechanisms to hold the ruler accountable. This successful formula was then adopted by other European states, through a kind of natural selection that favored the most successful variation. The book stops with the French Revolution, and a subsequent book will continue to the present. Fukuyama still thinks the modern liberal state is the "end of history" (the title of his most famous book).
In a parallel universe with no feudalism, European rulers might have been absolute, just like those of China. But through the accident of democracy, England and then the United States created a powerful system that many others wish to emulate. The question for China, in Dr. Fukuyama’s view, is whether a modern society can continue to be run through a top-down bureaucratic system with no solution to the bad emperor problem. “If I had to bet on these two systems, I’d bet on ours,” he said.
Tuesday, March 08, 2011
Just after looking at my morning email, in which a blog reader points me to this NPR piece on David Brooks' new book "The Social Animal", I open the New York Times and find Brooks' column on the same subject. Even though I don't agree with many of his conservative views, I have enormous respect for Brooks' efforts to bring the insights of modern research on how humans really work into the political policy arena. His Op-Ed piece this morning is an exceptionally well done summary of how public policy, guided by the fantasy of the rational citizen, ignores the emotional brain that is really running the show. Below I offer some (slightly rearranged) clips. His view of many policy failures (as in public education) is that they rely
...on an overly simplistic view of human nature. We have a prevailing view in our society — not only in the policy world, but in many spheres — that we are divided creatures. Reason, which is trustworthy, is separate from the emotions, which are suspect. Society progresses to the extent that reason can suppress the passions...the unconscious parts of the mind are most of the mind, where many of the most impressive feats of thinking take place. Second, emotion is not opposed to reason; our emotions assign value to things and are the basis of reason. Finally, we are not individuals who form relationships. We are social animals, deeply interpenetrated with one another, who emerge out of relationships.
This body of research suggests the French enlightenment view of human nature, which emphasized individualism and reason, was wrong. The British enlightenment, which emphasized social sentiments, was more accurate about who we are. It suggests we are not divided creatures. We don’t only progress as reason dominates the passions. We also thrive as we educate our emotions.
Now hundreds of thousands of researchers are coming up with a more accurate view of who we are. When you synthesize this research, you get different perspectives on everything from business to family to politics. You pay less attention to how people analyze the world but more to how they perceive and organize it in their minds. You pay a bit less attention to individual traits and more to the quality of relationships between people. The research illuminates a range of deeper talents, which span reason and emotion and make a hash of both categories:
Attunement: the ability to enter other minds and learn what they have to offer.
Equipoise: the ability to serenely monitor the movements of one’s own mind and correct for biases and shortcomings.
Metis: the ability to see patterns in the world and derive a gist from complex situations.
Sympathy: the ability to fall into a rhythm with those around you and thrive in groups.
Limerence: This isn’t a talent as much as a motivation. The conscious mind hungers for money and success, but the unconscious mind hungers for those moments of transcendence when the skull line falls away and we are lost in love for another, the challenge of a task or the love of God. Some people seem to experience this drive more powerfully than others.
Frank Keil reviews experiments that show that young children are often quite adept at uncovering statistical and causal patterns, and that many foundations of scientific thought are built impressively early in our lives. Infants make causal interpretations by integrating information in ways that closely mirror adults...certain sequences of events automatically elicit thoughts of causation at all ages. In addition to figuring out the causal relations underlying novel devices, children are also sensitive to highly abstract causal patterns associated with specific “domains” that correspond roughly to...biology, physical mechanics, and psychology.
...an infant learning language, upon hearing streams of syllables, not only has to notice how often certain syllables occur but also needs to infer higher-order patterns arising from those syllables. One study showed that 5-month-old infants can handle this challenge by rapidly tracking not only the sounds of the syllables but also visual patterns associated with each syllable. In the experiment, infants looking at a computer screen were repeatedly presented with abstract patterns of syllables and shapes. An “ABB” pattern, for instance, could be represented by certain shapes corresponding to the syllables “di ga ga.” When presented with a new pattern (ABA) with new syllables—such as “le ko le”—the infants looked longer at the shapes on the screen than if the new syllables were in the old ABB pattern. This suggests that they recognized it as a new, unfamiliar correlation...6-month-olds can take the next step and infer causation from certain kinds of correlations. In these experiments, researchers measured how long infants looked at animations showing “collisions” of shapes. In some animations, one object “launched” a second one, causing it to move, as when two billiard balls collide. When shown animations in which the balls reversed roles, infants looked longer at the new pattern than at the original one.
...In thinking about biological phenomena such as disease or inheritance, children may make different inferences from patterns of covariation than they do for physical phenomena such as collisions or rotating gears. Another top-down expectation that children bring to living things, but not to artifacts, is an “essentialist bias”: the idea that something you can't see (e.g., “microstructural stuff”) causes what you can see (“surface phenomena” such as feathers or fur) and is the essence of the thing being observed. This is a guiding principle in much of formal science, even as it can also lead to false inferences, such as that species are defined by fixed essences.
Monday, March 07, 2011
For the last three years, Gallup has called 1,000 randomly selected American adults each day and asked them about their emotional status, work satisfaction, eating habits, illnesses, stress levels and other indicators of their quality of life. Here is an interactive graphic summary of the results, sorted by congressional districts, and a related article.
We can't have our cake and eat it too, and every major religious tradition advocates forsaking pleasure in the moment to realize greater, deferred rewards. A recent study by Moffitt et al. statistically controls for the potential confounds of intelligence and family background in examining life outcomes in a large, nationally representative sample of New Zealanders. From their abstract:
...is self-control important for the health, wealth, and public safety of the population? Following a cohort of 1,000 children from birth to the age of 32 y, we show that childhood self-control predicts physical health, substance dependence, personal finances, and criminal offending outcomes, following a gradient of self-control. Effects of children's self-control could be disentangled from their intelligence and social class as well as from mistakes they made as adolescents. In another cohort of 500 sibling-pairs, the sibling with lower self-control had poorer outcomes, despite shared family background. Interventions addressing self-control might reduce a panoply of societal costs, save taxpayers money, and promote prosperity.
Friday, March 04, 2011
I've finally had time to check out the abundant links to background articles in John Tierney's Feb. 21 article in the NYTimes. I had not realized that there was such firm evidence that men can unconsciously detect, via changes in women's cues and behavior and perhaps via pheromones, when women are at the ovulation stage of their menstrual cycle.
...recent studies have found large changes in cues and behavior when a woman is at this stage of peak fertility. Lap dancers get much higher tips (unless they’re taking birth-control pills that suppress ovulation, in which case their tips remain lower). The pitch of a woman’s voice rises. Men rate her body odor as more attractive and respond with higher levels of testosterone.
I would recommend reading the whole article, which describes experiments supporting rather elaborate evolutionary psychology theories about maintaining breeding relationships while also enhancing genetic diversity, such as:
...the “good genes” evolutionary explanation for adultery: a quick fling with a good-looking guy can produce a child with better genes, who will therefore have a better chance of passing along the mother’s genes. But this sort of infidelity is risky if the woman’s unsexy long-term partner finds out and leaves her alone to raise the child. So it makes sense for her to limit her risks by being unfaithful only at those times she’s fertile....By that same evolutionary logic, it makes sense for her partner to be most worried when she’s fertile.
Thursday, March 03, 2011
How long does a hug last? The quick answer is about 3 seconds, according to Nagy's study of the post-competition embraces of Olympic athletes. The long answer is more profound. A hug lasts about as long as many other human actions and neurological processes, which supports a hypothesis that we go through life perceiving the present in a series of 3-second windows...Crosscultural studies dating back to 1911 have shown that people tend to operate in 3-second bursts. Goodbye waves, musical phrases, and infants' bouts of babbling and gesturing all last about 3 seconds. Many basic physiological events, such as relaxed breathing and certain nervous system functions do, too. And several other species of mammals and birds follow the general rule in their body-movement patterns. A 1994 study of giraffes, okapis, roe deer, raccoons, pandas, and kangaroos living in zoos, for example, found that although the duration of the animals' every move, from chewing to defecating, varied considerably, the average was...3 seconds...The results reinforce an idea current among some psychologists that intervals of about 3 seconds are basic temporal units of life that define our perception of the present moment...the "feeling of nowness" tends to last 3 seconds.
Many studies have shown that people are excessively optimistic about marriage, work, sports, health, and life expectancy. Massey et al. have asked: does such optimism persist as people acquire feedback from real-world experiences? And, second, is optimism actually caused by desire or hope? These are important questions, for many of life's most consequential decisions (e.g., about health, investments, or relationships) feature both strong preferences and the chance to revise beliefs in light of new information (e.g., medical exams, balance statements, or a second date). They asked National Football League (NFL) fans to predict game outcomes before each week of the 17-week NFL season. Studying football predictions offered four important benefits:
First, the 17-week season provided participants with quick, frequent, and unambiguous feedback over a significant (and nonarbitrary) duration of time, and thus it provided an ideal context for evaluating the effect of experience on optimism. Second, NFL fans’ preferences for their favorite teams are strong and often held with a degree of intensity unlikely to be generated by incentives offered in the laboratory. Third, a number of alternative explanations for the effects of desirability, such as those implicating team strength and familiarity, can be controlled methodologically and statistically. Finally, unlike predictions in other emotionally important domains, football predictions offer the benefit of objective benchmarks—both ex ante and ex post—against which the accuracy of predictions can be evaluated.
Their results:
We found that people are optimistic in their predictions—they judge preferred outcomes to be more likely than nonpreferred outcomes. We extended this observation in two important ways. First, we showed that optimism persists despite extensive experience—football fans are as optimistic after 4 months of feedback as they are after 4 weeks of feedback. Second, we found that desirability fueled this optimistic bias. Using four distinct tests, and a wide variety of control variables, we found that optimistic predictions were robust and uniquely related to the desirability of the outcome.
Wednesday, March 02, 2011
It's a common assumption that happier people live longer, but convincing data on this have not been easy to come by. Studies have offered widely different and competing findings - some finding no causation or reverse causation, others suggesting that unidentified, unobserved factors influence both happiness and longevity and health. Frey points to work by Diener and Chan showing that many kinds of studies, using different methods, conclude that happiness has a positive causal effect on longevity and physiological health. One of the studies noted in their survey is a meta-analysis based on 24 studies that estimates that happy people live 14% longer than persons who report that they are unhappy. In a survey of people living in industrial countries, happier people enjoy an increased longevity of between 7.5 and 10 years. Happier people are also less likely to commit suicide, and they are less often the victims of accidents...In longitudinal studies individuals are followed over many years, to identify whether the happier ones live longer. In the famous "nun study" researchers asked young women about their subjective happiness level before they entered a monastery. Those who perceived themselves to be happier died at a median age of 93.5 years. Those who considered themselves to be less happy died at a median age of 86.6 years.
This work by Martens et al. demonstrates that our conscious attention to one task area (such as noting geometric shapes) can sensitize unconscious processes in that area such as subliminal priming (facilitatory effects elicited by masked stimuli that are not consciously perceived). This shows that unconscious processing is crucially dependent on top-down attention, and contrasts with the classical view that unconscious cognition is characterized by the lack of top-down influences.
Are unconscious processes susceptible to attentional influences? In two subliminal priming experiments, we investigated whether task sets differentially modulate the sensitivity of unconscious processing pathways. We developed a novel procedure for masked semantic priming of words (Experiment 1) and masked visuomotor priming of geometrical shapes (Experiment 2). Before presentation of the masked prime, participants performed an induction task in which they attended to either semantic or perceptual object features designed to activate a semantic or perceptual task set, respectively. Behavioral and electrophysiological effects showed that the induction tasks differentially modulated subliminal priming: Semantic priming, which involves access to conceptual meaning, was found after the semantic induction task but not after the perceptual induction task. Visuomotor priming was observed after the perceptual induction task but not after the semantic induction task. These results demonstrate that unconscious cognition is influenced by attentional control. Unconscious processes in perceptual and semantic processing streams are coordinated congruently with higher-level action goals.
Tuesday, March 01, 2011
Being a self-critical person who rarely gets off his own case, I was considerably softened by a Tara Parker-Pope article in this morning's Science section of the NYTimes on:
...a burgeoning new area of psychological research called self-compassion — how kindly people view themselves...The research suggests that giving ourselves a break and accepting our imperfections may be the first step toward better health. People who score high on tests of self-compassion have less depression and anxiety, and tend to be happier and more optimistic. Preliminary data suggest that self-compassion can even influence how much we eat and may help some people lose weight.
The article contains links to several original research papers.
Egalitarian behavior is considered to be a species-typical component of human cooperation. Human adults tend to share resources equally, even if they have the opportunity to keep a larger portion for themselves. Recent experiments have suggested that this tendency emerges fairly late in human ontogeny, not before 6 or 7 years of age. Here we show that 3-year-old children share mostly equally with a peer after they have worked together actively to obtain rewards in a collaboration task, even when those rewards could easily be monopolized. These findings contrast with previous findings from a similar experiment with chimpanzees, who tended to monopolize resources whenever they could. The potentially species-unique tendency of humans to share equally emerges early in ontogeny, perhaps originating in collaborative interactions among peers.