I want to pass on this stimulating Nature review by Daniel Wegner of "Does Consciousness Cause Behavior?" (Pockett, Banks, and Gallagher, Eds.; MIT Press, Cambridge, MA, 2006).
Imagine a gadget, call it "brain-o-vision," for brain scanning that doesn't create pictures of brains at all. That's right, no orbs spattered with colorful "activations" that need to be interpreted by neuroanatomists. Instead, with brain-o-vision, what a brain sees is what you get--an image of what that brain is experiencing. If the person who owns the brain is envisioning lunch, up pops a cheeseburger on the screen. If the person is reading a book, the screen shows the words. For that matter, if the brain owner is feeling pain, perhaps brain-o-vision could reach out and swat the viewer with a rolled-up newspaper. Brain-o-vision could give us access to another person's consciousness (1).
Figure credit: Joe Sutliff
Technologies for brain-o-vision are beginning to seem possible. We are learning how brain activations map onto emotions, memories, and mental processes, and it may not be long before we can translate activations into Google searches for images of what the brain is thinking. There is a specific brain area linked with face perception (2), for instance, and even a neuron that fires when its owner sees Jennifer Aniston (3). So why, in principle, shouldn't we be able to scan a brain and discover when it is looking at her--and eventually even learn what she's wearing? Of course, it may be many years to the beta version. But imagine that everything works out and brain-o-vision goes on sale at Wal-Mart. Could the device solve the problem of whether consciousness causes behavior?
With direct evidence of a person's consciousness, we could do science on the question. We could observe regularities in the relation between consciousness (say, a thought of sipping coffee) and behavior (the actual drink). If the consciousness always preceded the behavior (and never occurred without being followed by the behavior), we could arrive at the inductive inference of causation and, as scientists, be quite happy that we had established a causal connection. In fact, this is the project about which several of the contributors to Does Consciousness Cause Behavior? (Marc Jeannerod, Richard Passingham and Hakwan Lau, Suparna Choudhury and Sarah-Jayne Blakemore) give masterful reports (using measures of consciousness other than brain-o-vision). So what's the problem? Why is the issue so vexing that this book and many others have taken up the question? Certainly, one snag is that we don't yet have brain-o-vision. But that's not the full story. There is a key sidetrack on the way to establishing this causal inference that has left philosophers and scientists in a muddle for years.
The problem is that we each have our own personal brain-o-vision shimmering and blaring in our heads all day long. We have our own consciousness, and we find its images mesmerizing. The picture that our minds produce shows what looks exactly like a causal relationship: I thought of drinking the coffee and then I did it. This apparent relationship anchors our intuition about the conscious causation of behavior so deeply that it is difficult to understand that this causal inference is something that ought to be a scientific matter, not an intuitive one. We can't turn off the inner television and try to figure out what really happened. Each of the volume's contributors struggles to find some rapprochement between the personal experience of conscious causation and the possibility that consciousness might not cause behavior--leaving the experience an illusion.
An occasional undercurrent in the volume is the idea that exceptions to the standard inner experience of conscious causation should be discarded as uninformative. For example, Libet's classic finding (4) that brain activation precedes the reported conscious experience of willing an action is often cited as evidence that consciousness is not the initial cause of behavior and instead occurs within a chain of events initiated by the brain. Several contributors examine this finding in creative ways--but, curiously, others belittle it as a laboratory-bound oddity. The dismissal of exceptional cases extends to some chapters that question the value of examining any unusual lapses of conscious causation--such as those in hypnosis, facilitated communication, schizophrenia, or psychogenic movement disorders, or in automatisms such as dowsing and table-turning. These anomalous cases sometimes reveal that the experience of conscious causation can diverge from the actual causal circumstances surrounding behavior. We need to understand such cases to establish when it is that consciousness merely thinks it is causing behavior. Exploring a phenomenon by studying its boundaries is a standard operating procedure of science, and it is curious that some students of mind would wish such informative exceptions swept under the rug.
Research into conscious causation is complicated by the fact that the scientists and philosophers studying the problem are people. Our own personal brain-o-vision leads us to idealize apparent conscious causation and disparage exceptions. We may not be able to turn off our own consciousness and consider the question dispassionately, but it probably would help.
References and Notes
1. Thanks to D. Dennett for this idea.
2. N. Kanwisher, J. McDermott, M. M. Chun, J. Neurosci. 17, 4302 (1997).
3. R. Q. Quiroga, L. Reddy, G. Kreiman, C. Koch, I. Fried, Nature 435, 1102 (2005).
4. B. Libet, Behav. Brain Sci. 8, 529 (1985).