...the argument here is that there is no subjective impression [in the way we commonly suppose]; there is only information in a data-processing device. When we look at a red apple, the brain computes information about color. It also computes information about the self and about a (physically incoherent) property of subjective experience. The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing. Cognition is captive to those internal models. Such a brain would inescapably conclude it has subjective experience.
I concede that this approach is counterintuitive. One reason is that it seems to leave a gap in the logic: Why would the brain waste energy computing information about subjective awareness and attributing that property to itself, if the brain doesn’t in fact have this property?
This is where my own work comes in. In my lab at Princeton, my colleagues and I have been developing the “attention schema” theory of consciousness, which may explain why that computation is useful and would evolve in any complex brain. Here’s the gist of it:
Take again the case of color and wavelength. Wavelength is a real, physical phenomenon; color is the brain’s approximate, slightly incorrect model of it. In the attention schema theory, attention is the physical phenomenon and awareness is the brain’s approximate, slightly incorrect model of it. In neuroscience, attention is a process of enhancing some signals at the expense of others. It’s a way of focusing resources. Attention: a real, mechanistic phenomenon that can be programmed into a computer chip. Awareness: a cartoonish reconstruction of attention that is as physically inaccurate as the brain’s internal model of color.
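Since the essay claims attention is mechanistic enough to be "programmed into a computer chip," here is a toy sketch (my illustration, not Graziano's) of attention as competitive signal enhancement: a fixed budget of gain is divided among signals, so boosting one necessarily suppresses the others.

```python
import math

def attend(signals, focus_weights, sharpness=2.0):
    """Toy model of attention as competitive signal enhancement.

    Signals compete for a fixed budget of gain (via a softmax over the
    focus weights), so enhancing one signal comes at the expense of the
    rest. Illustrative only; real neural attention is far messier.
    """
    exps = [math.exp(sharpness * w) for w in focus_weights]
    total = sum(exps)
    gains = [e / total for e in exps]          # gains sum to 1: a fixed resource
    return [s * g for s, g in zip(signals, gains)]

# Three equal input signals; "attention" is focused on the second one.
out = attend([1.0, 1.0, 1.0], [0.0, 1.0, 0.0])
# The attended signal comes out stronger than the unattended ones.
```

The design choice here mirrors the essay's phrasing: attention is "enhancing some signals at the expense of others," which the fixed softmax budget captures directly.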
In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it.
One reason that the brain needs an approximate model of attention is that to be able to control something efficiently, a system needs at least a rough model of the thing to be controlled. Another reason is that to predict the behavior of other creatures, the brain needs to model their brain states, including their attention. This theory pulls together evidence from social neuroscience, attention research, control theory and elsewhere.
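The control-theoretic point can be made concrete with a toy simulation (my construction, not from the essay): a controller that knows only an approximate model of a system can still steer it close to a target, which is why a rough model, like the attention schema, earns its keep.

```python
def simulate(steps=50, a=0.9, a_model=0.8, target=1.0):
    """Toy illustration: controlling a system with an approximate model.

    The true plant evolves as x <- a*x + u. The controller only knows a
    rough model (a_model, deliberately not equal to a) and picks u so
    that its *predicted* next state hits the target: u = target - a_model*x.
    All parameter values are arbitrary, chosen for illustration.
    """
    x = 0.0
    for _ in range(steps):
        u = target - a_model * x   # decision based on the approximate model
        x = a * x + u              # true dynamics, which the controller never sees
    return x

final = simulate()
# Despite the model error, x settles near the target (at a small offset
# proportional to the mismatch a - a_model).
```

With a perfect model (`a_model == a`) the state lands exactly on the target; with the slightly wrong model it settles nearby, which is the essay's point: an approximate, "slightly incorrect" model is good enough for efficient control.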
Almost all other theories of consciousness are rooted in our intuitions about awareness. Like the intuition that white light is pure [when it is in fact a spectrum of all colors], our intuitions about awareness come from information computed deep in the brain. But the brain computes models that are caricatures of real things. And as with color, so with consciousness: It’s best to be skeptical of intuition.

The letters in response to the above take Graziano to task for "explaining consciousness away," when what he is actually trying to do is not discount our experience of awareness or selfhood, but give a description of the physical process that constitutes them: a description that is counterintuitive, but, I think, more likely to be correct than anything I've seen thus far.