I recently came across an important paper that I didn't spot when it first appeared in 2013.
Tononi and collaborators question the common assumption that knowing everything about the micro level of neurons and their interactions can completely specify the behavior of macro levels of systems of neurons. They use simple simulated systems, including neural-like ones, to show quantitatively that the macro level can causally supersede the micro level, i.e., causal emergence can occur. In practical terms this means, for example, that interactions between nodes in the salience (attentional) or default brain networks that
generate our behaviors might be better understood at their level than by knowing everything about the zillions of synapses that compose them. (This perspective is an antidote to the
purists who maintain that macro-level studies are putting the cart before the horse...that they can't get us anywhere until we have painstakingly worked out the steps from ion channels to nerve cells, to simple systems of nerve cells, and on up through increasingly higher levels of integration.) My primitive mathematical abilities preclude me from really understanding their computations except in the most general way, but I want to pass on their abstract, which is a dense but clear read:
Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system’s mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system’s possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence—the gain in EI when moving from a micro to a macro level of analysis.
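To get a more concrete feel for what "effective information" means, here is a minimal numerical sketch (my own toy construction, not one of the paper's actual examples). Following the paper's definition, EI is the mutual information between a maximum-entropy ("intervene on every state with equal probability") distribution over the system's states and the resulting distribution of next states. In this hypothetical four-state micro system the mechanisms are noisy and degenerate, while a coarse-graining into two macro states is fully deterministic, so EI comes out higher at the macro level:

```python
import numpy as np

def effective_information(tpm):
    """EI of a transition probability matrix (rows = current state,
    columns = next state): the mutual information between a uniform
    (maximum-entropy) intervention over states and the effect distribution."""
    tpm = np.asarray(tpm, dtype=float)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Effect distribution averaged over the uniform intervention
    avg_effect = tpm.mean(axis=0)
    # EI = H(average effect) - mean over states of H(effect | state)
    return entropy(avg_effect) - np.mean([entropy(row) for row in tpm])

# Hypothetical micro system: states 0-2 are noisy and degenerate
# (each leads to any of 0-2 with equal probability); state 3 maps to itself.
micro_tpm = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Coarse-graining: macro state A = {0, 1, 2}, macro state B = {3}.
# At the macro level the dynamics are deterministic: A -> A, B -> B.
macro_tpm = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])

print(f"EI (micro): {effective_information(micro_tpm):.3f} bits")  # ~0.811
print(f"EI (macro): {effective_information(macro_tpm):.3f} bits")  # 1.000
```

In this toy case the micro level has the larger state space, but its mechanisms are so noisy and degenerate that the deterministic two-state macro description carries more effective information, which is exactly the trade-off the abstract describes.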