Friday, July 11, 2008

Resveratrol - protection from the ravages of aging.

In mice, at least... An article in Wired Magazine points to a multi-authored study in Cell Metabolism:
A small molecule that safely mimics the ability of dietary restriction (DR) to delay age-related diseases in laboratory animals is greatly sought after. We and others have shown that resveratrol mimics effects of DR in lower organisms. In mice, we find that resveratrol induces gene expression patterns in multiple tissues that parallel those induced by DR and every-other-day feeding. Moreover, resveratrol-fed elderly mice show a marked reduction in signs of aging, including reduced albuminuria, decreased inflammation, and apoptosis in the vascular endothelium, increased aortic elasticity, greater motor coordination, reduced cataract formation, and preserved bone mineral density. However, mice fed a standard diet did not live longer when treated with resveratrol beginning at 12 months of age. Our findings indicate that resveratrol treatment has a range of beneficial effects in mice but does not increase the longevity of ad libitum-fed animals when started midlife.

Where Ritalin acts in the brain to focus attention.

An interesting piece of work from Berridge's lab here at the University of Wisconsin shows that the cognition- and attention-enhancing drug Ritalin (methylphenidate, MPH) fine-tunes the functioning of neurons in the prefrontal cortex (PFC), which is involved in attention, decision-making and impulse control. While it enhances the efflux of the neurotransmitters norepinephrine and dopamine in the PFC, it appears to have minimal effects elsewhere.

Only working memory–enhancing doses of MPH increased the responsivity of individual PFC neurons and altered neuronal ensemble responses within the PFC. The effects were not observed outside the PFC (i.e., within somatosensory cortex). In contrast, high-dose MPH profoundly suppressed evoked discharge of PFC neurons. These observations suggest that preferential enhancement of signal processing within the PFC, including alterations in the discharge properties of individual PFC neurons and PFC neuronal ensembles, underlie the behavioral/cognitive actions of low-dose psychostimulants.

Thursday, July 10, 2008

A new kind of science, as the data deluge makes the scientific method obsolete...

An article by Chris Anderson in Wired Magazine, pointed out to me by my son Jon, argues that science as we have known it has ended. The argument is that the quest for knowledge that used to begin with grand theories now, in the petabyte age, begins with massive amounts of data. Google has set the new model for science. I show some clips here, followed by the contra argument from John Timmer of Ars Technica:
Google conquered the advertising world with nothing more than applied mathematics. It didn't pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right...Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough. No semantic or causal analysis is required. That's why Google can translate languages without actually "knowing" them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.

The hypothesize-model-test model of science is becoming obsolete...The models we were taught in school about "dominant" and "recessive" genes steering a strictly Mendelian process have turned out to be an even greater simplification of reality than Newton's laws. The discovery of gene-protein interactions and other aspects of epigenetics has challenged the view of DNA as destiny and even introduced evidence that environment can influence inheritable traits, something once considered a genetic impossibility...the more we learn about biology, the further we find ourselves from a model that can explain it...There is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It's just data. By analyzing it with Google-quality computing resources, though, Venter has advanced biology more than anyone else of his generation.

This kind of thinking is poised to go mainstream. In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including Google File System, IBM's Tivoli, and an open source version of Google's MapReduce. Early CluE projects will include simulations of the brain and the nervous system and other biological research that lies somewhere between wetware and software.
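The MapReduce pattern mentioned above is worth making concrete. Here is a minimal, single-process sketch of the idea in Python (a toy stand-in for Google's distributed implementation, not their actual API): a map phase emits key-value pairs, and a reduce phase aggregates them per key.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    # Reduce: sum the counts for each distinct key (word).
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the data deluge", "the petabyte age", "data wins"]
print(reduce_phase(map_phase(docs)))
# {'the': 2, 'data': 2, 'deluge': 1, 'petabyte': 1, 'age': 1, 'wins': 1}
```

In the real system, the map and reduce phases each run in parallel across thousands of machines, which is what makes petabyte-scale pattern hunting feasible.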
Here is the immediate rejoinder to this article from John Timmer at Ars Technica.
Every so often, someone (generally not a practicing scientist) suggests that it's time to replace science with something better. The desire often seems to be a product of either an exaggerated sense of the potential of new approaches, or a lack of understanding of what's actually going on in the world of science. This week's version, which comes courtesy of Chris Anderson, the Editor-in-Chief of Wired, manages to combine both of these features in suggesting that the advent of a cloud of scientific data may free us from the need to use the standard scientific method.

It's easy to see what has Anderson enthused. Modern scientific data sets are increasingly large, comprehensive, and electronic. Things like genome sequences tell us all there is to know about the DNA present in an organism's cells, while DNA chip experiments can determine every gene that's expressed by that cell. That data's also publicly available—out in the cloud, in the current parlance—and it's being mined successfully. That mining extends beyond traditional biological data, too, as projects like WikiProteins are also drawing on text-mining of the electronic scientific literature to suggest connections among biological activities.

There is a lot to like about these trends, and little reason not to be enthused about them. They hold the potential to suggest new avenues of research that scientists wouldn't have identified based on their own analysis of the data. But Anderson appears to take the position that the new research part of the equation has become superfluous; simply having a good algorithm that recognizes the correlation is enough.

The source of this flight of fancy was apparently a quote by Google's research director, who repurposed a cliché that most scientists are aware of: "All models are wrong, and increasingly you can succeed without them." And Google clearly has. It doesn't need to develop a theory as to why a given pattern of links can serve as an indication of valuable information; all it needs to know is that an algorithm that recognizes specific link patterns satisfies its users. Anderson's argument distills down to the suggestion that science can operate on the same level—mechanisms, models, and theories are all dispensable as long as something can pick the correlations out of masses of data.

I can't possibly imagine how he comes to that conclusion. Correlations are a way of catching a scientist's attention, but the models and mechanisms that explain them are how we make the predictions that not only advance science, but generate practical applications. One only needs to look at a promising field that lacks a strong theoretical foundation—high-temperature superconductivity springs to mind—to see how badly the lack of a theory can impact progress. Put in more practical terms, would Anderson be willing to help test a drug that was based on a poorly understood correlation pulled out of a datamine? These days, we like our drugs to have known targets and mechanisms of action and, to get there, we need standard science.

Anderson does provide two examples that he feels support his position, but they actually appear to undercut it. He notes that we know quantum mechanics is wrong on some level, but have been unable to craft a replacement theory after decades of work. But he neglects to mention two key things: without the testable predictions made by the theory, we'll never be able to tell how precisely it is wrong and, in those decades where we've failed to find a replacement, the predictions of quantum mechanics have been used to create the modern electronics industry, with the data cloud being a consequence of that.

If anything, his second example is worse. We can now perform large-scale genetic surveys of the life present in remote environments, such as the far reaches of the Pacific. Doing so has informed us that there's a lot of unexplored biodiversity on the bacterial level; fragments of sequence hint at organisms we've never encountered under a microscope. But as Anderson himself notes, the only thing we can do is make a few guesses as to the properties of the organisms based on who their relatives are, an activity that actually requires a working scientific theory, namely evolution. To do more than that, we need to deploy models of metabolism and ecology against the bacteria themselves.

Overall, the foundation of the argument for a replacement for science is correct: the data cloud is changing science, and leaving us in many cases with a Google-level understanding of the connections between things. Where Anderson stumbles is in his conclusions about what this means for science. The fact is that we couldn't have even reached this Google-level understanding without the models and mechanisms that he suggests are doomed to irrelevance. But, more importantly, nobody, including Anderson himself if he had thought about it, should be happy with stopping at this level of understanding of the natural world.

Meditation and executive function - untraining the brain.

A MindBlog reader passes on this link to a reposting of an interesting article by Chris Chatham on how normal conflicts in decision making can be lessened by changes in attention. My May 1 post references other work on this topic.

Wednesday, July 09, 2008

Worried sick...achieving wellness?

I always enjoy it when a good curmudgeonly antidote comes along to temper bright-eyed optimism. Such a contrast is provided by Zuger's review of very different books by Snyderman and Hadler. Snyderman:
With chirpy, can-do optimism...recapitulates the standard wisdom. Watch your diet, exercise, lose weight, stop smoking, be screened regularly for a variety of dire illnesses, rein in cholesterol and blood sugar, stay in touch with your doctor and be sure to check out those aches and pains pronto, just in case. So speaks the medical establishment.
While Hadler:
...who is a longtime debunker of much the establishment holds dear...reminds us...we are all going to die...holding every dire illness at bay forever is simply not an option. The real goal is to reach a venerable age — say 85 — more or less intact. And the statistics tell Dr. Hadler that ignoring most of the advice Dr. Snyderman offers is the way to do it.
An excerpt from Hadler's book:
Daily, we are offered the image of the baby-boom generation going on forever, making impossible demands on successive generations to provide pensions, health care, and community. That, too, is fatuous. However, more of us are living longer than did our parents. Clearly, the likelihood that we will enjoy life as an octogenarian has increased over the course of the twentieth century. Far less clear is whether the likelihood of becoming a nonagenarian has increased similarly. It has certainly not done so at anything like the same rate as the likelihood of being an octogenarian. The effect is so striking that it has caused many of us to wonder if there is not a fixed longevity for our species, set around eighty-five years of age. Some have likened this to a warranty: you are off warranty at eighty-five, beyond is a bonus, and well beyond is a statistical oddity. This projected demographic is consistent with current population trends. With one caveat, these hard facts seem unlikely to change. It is possible that molecular biology can alter the fixed longevity of our species. But don't hold your breath. None of us will live to see that — and maybe no one ever will.

Eighty-five (+/- a little bit) appears to be the programmed life expectancy for our species. I grant that the science is imperfect. But eighty-five is a linchpin of my personal philosophy of life. I, for one, do not care how many diseases I harbor on my eighty-fifth birthday, though I prefer not to know that they are creeping up on me. I, for one, do not care which of these diseases carries me off as long as the leaving is gentle and the legacy meaningful. Perhaps the best we can reasonably hope for is eighty-five years of life free of morbidities that overwhelm our wherewithal to cope, then to die in our sleep on our eighty-fifth birthday.

Ecocultural basis of cognition

Farmers and fishermen are more holistic than herders. Uskul et al. offer a fascinating study on factors influencing holistic versus more focused perception:
It has been proposed that social interdependence fosters holistic cognition, that is, a tendency to attend to the broad perceptual and cognitive field, rather than to a focal object and its properties, and a tendency to reason in terms of relationships and similarities, rather than rules and categories. This hypothesis has been supported mostly by demonstrations showing that East Asians, who are relatively interdependent, reason and perceive in a more holistic fashion than do Westerners. We examined holistic cognitive tendencies in attention, categorization, and reasoning in three types of communities that belong to the same national, geographic, ethnic, and linguistic regions and yet vary in their degree of social interdependence: farming, fishing, and herding communities in Turkey's eastern Black Sea region. As predicted, members of farming and fishing communities, which emphasize harmonious social interdependence, exhibited greater holistic tendencies than members of herding communities, which emphasize individual decision making and foster social independence. Our findings have implications for how ecocultural factors may have lasting consequences on important aspects of cognition.

Tuesday, July 08, 2008

Brain regions active during different economic decisions.

The editor's choice section of Science magazine spotlights an interesting paper in the Journal of Neuroscience:
When we make economic decisions, for example the purchase of a good or a service, our brain has to perform at least three computations. First, it has to assess the goal value of the good: in economic terms, our maximal willingness to pay. Second, it has to assess the decision value of the good: the goal value minus the unavoidable costs. Third, there is a prediction error, which indicates the deviation from one's expectations of reward; the prediction error is positive when something better than expected happens and negative when the opposite occurs. Unfortunately, these three related quantities are intermingled and are often highly correlated, making it challenging to isolate the neural regions performing these computations.

Hare et al. have attempted to measure goal value, decision value, and prediction error in a single neuroimaging task so that they could dissociate these parameters. They found that ventral striatum activation reflected prediction error and not goal or decision value. However, activity in the medial orbitofrontal cortex and the central orbitofrontal cortex correlated with goal value and decision value, respectively.
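The relationships among the three quantities defined above are simple arithmetic, which a toy example makes clear (the numbers here are hypothetical, purely for illustration):

```python
willingness_to_pay = 10.0   # goal value: maximal willingness to pay for the good
cost = 4.0                  # unavoidable cost of obtaining it

goal_value = willingness_to_pay
decision_value = goal_value - cost   # net value that should drive the choice

expected_reward = 6.0
received_reward = 8.0
# Positive when the outcome is better than expected, negative when worse.
prediction_error = received_reward - expected_reward

print(goal_value, decision_value, prediction_error)  # 10.0 6.0 2.0
```

Because goal value, decision value, and prediction error move together in most choices (a higher goal value usually means a higher decision value and larger expected rewards), the experimental challenge is designing a task that decorrelates them.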
Here is a summary figure from the paper:

Figure - Combined activation maps for goal values (GVs), decision values (DVs), and prediction errors (PEs). Activity correlated with GVs in the mOFC is shown in red, activity correlated with DVs in the cOFC is shown in yellow, and activity correlated with PEs in the ventral striatum is shown in green.

Another Happiness Survey

A University of Michigan press release describes work from the World Values Survey, based at the University of Michigan Institute for Social Research: Denmark is the happiest nation in the world and Zimbabwe the unhappiest. The United States ranks 16th on the list, immediately after New Zealand.

Monday, July 07, 2008

Piazzolla - Otoño Porteño

Here is the second Piazzolla tango we did at the 6/29/08 Sunday musical at Twin Valley.

Brain Foods...

Gómez-Pinilla contributes a review article to the latest issue of Nature Reviews Neuroscience on how various dietary factors, in addition to some gut and brain hormones, increase the resistance of neurons to insults and promote mental fitness. I pass on one figure dealing with dietary omega-3 fatty acids, followed by a summary table.



The omega-3 fatty acid docosahexaenoic acid (DHA), which humans mostly attain from dietary fish, can affect synaptic function and cognitive abilities by providing plasma membrane fluidity at synaptic regions. DHA constitutes more than 30% of the total phospholipid composition of plasma membranes in the brain, and thus it is crucial for maintaining membrane integrity and, consequently, neuronal excitability and synaptic function. Dietary DHA is indispensable for maintaining membrane ionic permeability and the function of transmembrane receptors that support synaptic transmission and cognitive abilities. Omega-3 fatty acids also activate energy-generating metabolic pathways that subsequently affect molecules such as brain-derived neurotrophic factor (BDNF) and insulin-like growth factor 1 (IGF1). IGF1 can be produced in the liver and in skeletal muscle, as well as in the brain, and so it can convey peripheral messages to the brain in the context of diet and exercise. BDNF and IGF1 acting at presynaptic and postsynaptic receptors can activate signalling systems, such as the mitogen-activated protein kinase (MAPK) and calcium/calmodulin-dependent protein kinase II (CaMKII) systems, which facilitate synaptic transmission and support long-term potentiation that is associated with learning and memory.



Friday, July 04, 2008

MRI of mental time travel.

Arzy et al. make the interesting observation that one's imagined self location influences the neural activity related to mental time travel. Slightly edited clips from the article:
A fundamental characteristic of human conscious experience is the ability to not only experience the present moment but also to recall the past and predict the future, or to "travel" back and forth in time, a facility that is called "mental time travel" (MTT)...Converging evidence from recent memory research suggests that re-experiencing and pre-experiencing an event rely on similar neural mechanisms. Similar strategies and the same brain regions are found to be used in imagining past and future events, as future predictions may be based on past memories... when changing the location of one's self in time to past or future, one does not only recall and predict, but one also changes one's mental egocentric perspective on life events. Moreover, from these new self-locations in time, other life events might be regarded differently with respect to their relations to past or future. Thus, when imagining oneself as 10 years younger, last year's events are in the future (relative future) in relation to the initially imagined self-location in time, and vice versa (relative past).
Since earlier studies had shown behavioral and electrophysiological differences between judgments about one's own body while taken from one's actual spatial self-location versus different imagined self-locations, and given evidence that shared mechanisms process time and space in the brain, the authors developed a behavioral paradigm to determine if differences are found not only between different self-locations in time (past, now, and future), but also while imagining events in the relative past or the relative future. They followed neural correlates of MTT using behavioral measures, evoked potential (EP) mapping, and electrical neuroimaging in healthy adult participants.


Stimuli and procedure. The three different self-locations in time (past, now, and future) are shown. Participants were asked to mentally imagine themselves in one of these self-locations, and from these self-locations to judge whether different self or nonself events (e.g., top row) already happened (relative past, darker colors) or are yet to happen (relative future, lighter colors).
Their work confirmed that:
...that MTT is composed of two different cognitive processes: absolute MTT, which is the location of the self to different points in time (past, present, or future), and relative MTT, which is the location of one's self with respect to the experienced event (relative past and relative future). These processes recruit a network of brain areas in distinct time periods including the occipitotemporal, temporoparietal, and anteromedial temporal cortices. Our findings suggest that in addition to autobiographical memory processes, the cognitive mechanisms of MTT also involve mental imagery and self-location, and that relative MTT, but not absolute MTT, is more strongly directed to future prediction than to past recollection.

Generators of MTT map are localized to the right temporoparietal, occipitotemporal, and left anteromedial temporal cortices.
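The distinction between absolute and relative MTT reduces to a simple comparison, which a toy sketch makes concrete (the years below are hypothetical, not the paper's actual stimuli):

```python
def relative_orientation(event_year, imagined_self_year):
    # Relative MTT: is the event in the past or future as judged
    # *from the imagined self-location*, regardless of the actual present?
    if event_year > imagined_self_year:
        return "relative future"
    elif event_year < imagined_self_year:
        return "relative past"
    return "relative present"

now = 2008
# Imagining oneself 10 years younger, last year's event lies in the relative future...
print(relative_orientation(event_year=2007, imagined_self_year=now - 10))  # relative future
# ...while from one's actual self-location the same event is in the relative past.
print(relative_orientation(event_year=2007, imagined_self_year=now))       # relative past
```

Absolute MTT corresponds to choosing `imagined_self_year` (past, now, or future); relative MTT is the comparison of each event against that chosen vantage point.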

When Your Brain Lies to You

Even when a lie is presented with a disclaimer, people often later remember it as true. A brief review in the OpEd section of the NYTimes shows how a well-documented feature of our memory, source amnesia, might lead 10% of us to think that Barack Obama is a Muslim.

Thursday, July 03, 2008

Brain markers that predict vulnerability to psychosis.

Honey et al. offer an interesting study in the Journal of Neuroscience. As indicated in these slightly edited clips from the text and abstract:
They used a drug model of psychosis to relate presymptomatic physiology to symptom outcome. Ketamine induces transient psychotic symptoms in healthy volunteers and exacerbates existing symptoms in patients. They assessed brain responses, separately under placebo and ketamine treatments, in healthy volunteers across four cognitive challenges, each theoretically related to a symptom of psychosis. Two of the tasks (verbal working memory and attention) are associated with negative symptoms, which may result from social and cognitive disengagement attributable to reduced processing capacity of prefrontal cortex, leading to difficulties in concentration and maintaining task set. They predicted that prefrontal activity during the attention and working memory tasks would be associated with vulnerability to negative symptoms under ketamine.

A failure to monitor "inner speech" may provide a mechanism leading to auditory hallucinations, whereby self-generated speech is misattributed externally. Comparing verbal self-monitoring (imagining speech spoken by another person) with inner speech (minimal self-monitoring) increases prefrontal and temporal cortex activation in patients with auditory hallucinations. Ketamine produces auditory illusory experiences similar to the heightened auditory and visual awareness described by patients during the prodromal phase, and it has been suggested that these contribute to the development of hallucinations. The authors predicted that prefrontal and temporal cortex activation during a self-monitoring task would be associated with vulnerability to the auditory illusory experiences under ketamine.

Finally, a sentence completion task was used to engage brain regions associated with semantic processing. Thought disorder involves difficulty in constraining semantic threads of language, making speech disjointed and chaotic, as also observed under ketamine. In patients, the requirement to generate an appropriate semantic response to complete a sentence is associated with increased activation of left frontal and temporal cortex. They predicted that frontotemporal responses to a sentence completion task would predict vulnerability to thought disorder induced by ketamine.

They in fact found that brain responses to cognitive task demands under placebo predict the expression of psychotic phenomena after drug administration. Frontothalamic responses to a working memory task were associated with the tendency of subjects to experience negative symptoms under ketamine. Bilateral frontal responses to an attention task were also predictive of negative symptoms. Frontotemporal activations during language processing tasks were predictive of thought disorder and auditory illusory experiences. A subpsychotic dose of ketamine administered during a second scanning session resulted in increased basal ganglia and thalamic activation during the working memory task, paralleling previous reports in patients with schizophrenia. These results demonstrate precise and predictive brain markers for individual profiles of vulnerability to drug-induced psychosis.

Bias at the ballot box.

Berger et al. provide an interesting demonstration of how susceptible a voter's choice is to environmental cues. The two studies are described in Tim Lincoln's review of this work in Nature:
The first was an analysis of results from a general election held in Arizona in 2000, the ballot for which included a proposition to raise state sales tax from 5.0% to 5.6%, to increase education spending. Polling stations included churches, schools, community centres and government buildings.

Berger et al. predicted that voting in a school would produce more support for the proposition than voting in other places. Indeed it did, but not by much compared with other documented effects on voter choice such as order on the ballot paper. Nonetheless, the effect persisted through tests for various other confounding factors (for example, the possibility of a consistently different level of voter turnout at school polling locations).

The second study was a carefully run online experiment that also involved a proposed tax increase to fund schools. The 'voting environment' was manipulated by exposing participants to typical images of schools or control images. The upshot was the same, with the school images prompting greater (and apparently unconscious) support for the initiative than, for example, an image of an office.

All in all, the authors conclude that what they call contextual priming of polling location affects how people vote. They reasonably wonder whether such factors could, for example, influence voting in a church on such matters as gay marriage and stem-cell research.

But here's a thought. In the event of science spending being on the political agenda, why not offer the lab as a polling station? But maybe dim that fluorescent lighting, and persuade all those bearded fellows in white coats to take the day off — or not, as the case may be.

Wednesday, July 02, 2008

A Piazzolla Tango

I'm working up the videos of the Sunday musical at Twin Valley mentioned in Monday's post. Here is Astor Piazzolla's Invierno Porteño.

Why are musical chords cheerful or melancholy?

In the current issue of American Scientist, Cook and Hayashi offer a fascinating article on the psychoacoustics of harmony perception (PDF here). Major and minor chords entered Western music during the Renaissance, when two-part harmonies were supplanted by three-tone chords. The authors argue that human responses to these chords have a biological basis, rather than being learned (the opinion of most musical theorists). Their acoustical model explains harmony in terms of the relative positions of the three notes in a triad and how their complex higher harmonics, or upper partials, interact with them. Those of you interested in science and music should check out the special Nature series of essays on this topic.

Jean-Philippe Rameau, a French composer and author, wrote his Treatise on Harmony in 1722, one of the first and most influential studies of harmony in Western music. His book noted the profound emotional difference between major and minor chords: “The major mode is suitable for songs of mirth and rejoicing,” he wrote, while the minor mode was suitable for “plaints, and mournful songs.”

From their conclusions, after the analysis section of the paper:
Now that we have a model of how listeners identify a chord as major or minor, we may take the final step and speculate as to why the acoustical valence carries an emotional valence as well. We contend that the emotional symbolism of major and minor chords has a biological basis. Across the animal kingdom, vocalizations with a descending pitch are used to signal social strength, aggression or dominance. Similarly, vocalizations with a rising pitch connote social weakness, defeat or submission. Of course, animals convey these messages in other ways as well, with facial expressions, body posture and so on—but all else being equal, changes in the fundamental frequency of the voice have intrinsic meaning.

This same frequency code has been absorbed, though attenuated, in human speech patterns: A rising inflection is commonly used to denote questions, politeness or deference, whereas a falling inflection signals commands, statements or dominance. How might this translate to a musical context? If we start with a tense, ambiguous chord—for example, the augmented chord containing two 4-semitone intervals— and decrease any one of the three fundamentals by one semitone, the chord will resolve into a major key. It will then have a 5–4, 3–5, or 4–3 semitone structure. Conversely, if we resolve the ambiguous chord by raising any one of the three fundamentals by a semitone, we will obtain a minor chord. The universal emotional response to these chords stems, we believe, directly from an instinctive, preverbal understanding of the frequency code in nature. One of us (Cook) has explored this in more detail (see the bibliography).
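The resolution arithmetic the authors describe is easy to check. The sketch below (my own illustration, not the authors' model) represents pitches as semitone numbers (e.g., C=0, E=4, G#=8) and classifies a triad by its adjacent intervals, counting inversions:

```python
MAJOR = {(4, 3), (3, 5), (5, 4)}   # root position and the two inversions
MINOR = {(3, 4), (4, 5), (5, 3)}

def quality(triad):
    # Reduce pitches to one octave, sort, and read off the two adjacent intervals.
    notes = sorted(p % 12 for p in triad)
    intervals = (notes[1] - notes[0], notes[2] - notes[1])
    if intervals in MAJOR:
        return "major"
    if intervals in MINOR:
        return "minor"
    return "other"

augmented = [0, 4, 8]  # C, E, G#: two 4-semitone intervals
# Lowering any one note by a semitone yields a major chord...
lowered = [quality([n - (j == i) for j, n in enumerate(augmented)]) for i in range(3)]
# ...while raising any one note yields a minor chord.
raised = [quality([n + (j == i) for j, n in enumerate(augmented)]) for i in range(3)]
print(lowered, raised)  # ['major', 'major', 'major'] ['minor', 'minor', 'minor']
```

All six one-semitone resolutions of the augmented chord behave exactly as the authors state: downward motion of any voice produces a major triad, upward motion a minor one.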

Individual tastes and musical styles vary widely. In the West, music has changed over the centuries from styles that employed predominantly the resolved major and minor chords to styles that include more and more dissonant intervals and unresolved chords. Inevitably, some composers have taken this historical trend to its logical extreme, and produced music that fanatically avoids all indications of consonance or harmonic resolution. Such surprisingly colorless “chromatic” music is intellectually interesting, but notably lacking in the ebb and flow of tension and resolution that most popular music employs, and that most listeners crave. Whatever one’s own personal preferences may be for dissonance and unresolved harmonies, some kind of balance between consonance and dissonance, and between harmonic tension and resolution, seems to be essential—genre by genre, and individual by individual—to assure the emotional ups and downs that make music satisfying.

Making Memories, Again

Lasry et al., in a letter to Science, offer an interesting interpretation of work reported in a previous post, which showed that testing of already-learned words enhances long-term recall assessed one week later, whereas repeated studying has no beneficial effect. Here are their comments:
In their Report, "The critical importance of retrieval for learning" (15 February, p. 966), J. D. Karpicke and H. L. Roediger III show that delayed recall is optimized, not with repeated studying sessions, but with repeated testing sessions. The authors conclude that "retrieval during tests produces more learning than additional encoding."

We suggest a complementary interpretation. Classically, encoded information becomes consolidated and can later be retrieved. The tacit assumption is that retrieval of a consolidated memory is a read-only mechanism, which does not affect the memory. Recent studies have shown that elicited memories are in fact labile and become reconsolidated following each retrieval. Labile elicited memories require de novo protein synthesis to be maintained, similar to that of newly acquired memories. Neurobiological differences between consolidation and reconsolidation processes were recently described in Science. On the psychological level, reconsolidation is useful for explaining false and biased memories. Reconsolidation also leads to a memory model called multiple-trace theory: Every time a memory is reactivated, a new version of it is reconsolidated, leaving multiple traces of the same memory.

With respect to Karpicke and Roediger's study, we hypothesize that repeated testing (retrieval) should lead to multiple traces (due to repeated reconsolidation), which facilitate recall. Reinterpreting Karpicke and Roediger's results from a multiple-trace reconsolidation perspective supports this hypothesis and provides a new framework for explaining the effectiveness of frequent in-class assessments in pedagogies such as Peer Instruction.
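One way to see the intuition behind the multiple-trace account is a toy calculation (my own illustration, not from the letter): if each retrieval reconsolidates a new, independent trace, and recall succeeds when any one trace is accessed, then recall probability grows with the number of testing episodes.

```python
# Toy sketch of multiple-trace recall, assuming each trace is
# independently retrieved with probability s. Repeated testing
# lays down additional traces; on this toy account, restudying
# without retrieval does not.

def recall_probability(n_traces, s=0.3):
    # P(at least one of n independent traces is retrieved)
    return 1 - (1 - s) ** n_traces

for n in (1, 2, 4):
    print(n, round(recall_probability(n), 3))
# 1 0.3
# 2 0.51
# 4 0.76
```

The specific numbers are arbitrary; the point is only that stacking traces from repeated retrieval yields the qualitative advantage of testing over restudying that Karpicke and Roediger observed.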

Tuesday, July 01, 2008

The Toronto Village

A vacation picture...sidewalk cafe lunch at Maitland and Church streets.

Discontinuity between human and nonhuman minds?

In a recent issue of Behavioral and Brain Sciences (BBS), Penn, Holyoak and Povinelli argue for a profound difference in kind, not merely degree, between human and animal minds. Their suggestions elicit vigorous opposition, as well as some support, from an array of commentators. Several of the commentators point to evidence for flexible relational capabilities within a physical symbol system exhibited by dolphins and birds. As I read through the debate and its mind-numbing detail, I give up on trying to convey a succinct summary, but here is their abstract. (You might compare this with the work of Hauser et al. that I mentioned in a previous post.):
Over the last quarter century, the dominant tendency in comparative cognitive psychology has been to emphasize the similarities between human and nonhuman minds and to downplay the differences as “one of degree and not of kind” (Darwin 1871). In the present target article, we argue that Darwin was mistaken: the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds. To wit, there is a significant discontinuity in the degree to which human and nonhuman animals are able to approximate the higher-order, systematic, relational capabilities of a physical symbol system (PSS) (Newell 1980). We show that this symbolic-relational discontinuity pervades nearly every domain of cognition and runs much deeper than even the spectacular scaffolding provided by language or culture alone can explain. We propose a representational-level specification as to where human and nonhuman animals' abilities to approximate a PSS are similar and where they differ. We conclude by suggesting that recent symbolic-connectionist models of cognition shed new light on the mechanisms that underlie the gap between human and nonhuman minds.

Most popular consciousness papers...

For April 2008, from the ASSC archive:
1. Destrebecqz, Arnaud and Peigneux, Philippe (2005) Methods for studying unconscious learning. In: Progress in Brain Research. Elsevier, pp. 69-80. 1968 downloads from 26 countries. http://eprints.assc.caltech.edu/170/

2. Koriat, A. (2006) Metacognition and Consciousness. In: Cambridge Handbook of Consciousness. Cambridge University Press, New York, USA. 1799 downloads from 29 countries. http://eprints.assc.caltech.edu/175/

3. Sagiv, Noam and Ward, Jamie (2006) Crossmodal interactions: lessons from synesthesia. In: Visual Perception, Part 2 - Fundamentals of Awareness: Multi-Sensory Integration and High-Order Perception. Progress in Brain Research, Volume 155. Elsevier, pp. 259-271. 1089 downloads from 18 countries. http://eprints.assc.caltech.edu/224/

4. Chalmers, David J. (2004) How can we construct a science of consciousness? In: The Cognitive Neurosciences III. MIT Press, Cambridge, MA. 1009 downloads from 9 countries. http://eprints.assc.caltech.edu/28/

5. Dehaene, Stanislas and Changeux, Jean-Pierre and Naccache, Lionel and Sackur, Jérôme and Sergent, Claire (2006) Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends in Cognitive Sciences, 10 (5). pp. 204-211. 900 downloads from 13 countries. http://eprints.assc.caltech.edu/20/