Read this in Peter Watts's most recent novel:
A neuron didn't know whether it fired in response to a scent or a symphony. Brain cells weren't intelligent; only brains were. And brain cells weren't even the lower limit. The origins of thought were buried so deep they predated multicellular life itself: neurotransmitters and choanoflagellates, potassium ion gates in Monosiga.
I am a colony of microbes talking to itself, Brüks reflected.
Now, I quote this passage for a very specific reason.
In our previous discussions, Dak has dismissed emergent complexity because of its mystical appearance; that is, because of its apparent reliance on something that cannot be proven to actually be there (emergent complexity itself). You have said that large-scale phenomena cannot be considered separately from their component parts, thus reducing every potential emergent phenomenon to the actors involved: namely, human actors.
However, if the reductive tendency is to redefine large-scale complexity as the mere appearance produced by individual actors, then it makes no sense to privilege conscious human beings as the fundamental explanatory level in this case.
As Watts suggests, consciousness cannot be explained without recourse to neurons; but neurons themselves cannot be conscious. A single neuron doesn't understand English, for instance. Still, without neurons we would have no consciousness; so we agree that it is important to reduce the phenomenon to its components (at least to a degree).
However, in the scenario described above, consciousness would be the emergent phenomenon "explained away" via the reduction of brain processes to neurons. Yet we need conscious human agents in order to explain supra-complex phenomena such as linguistic communication, or traffic patterns, or aesthetics, etc. Without conscious human actors, we have nothing to reduce these "higher level" (for lack of a better term) phenomena to. So, for simplicity's sake, we will take three examples (neurons, conscious humans, and traffic patterns) and plot them on the following scale:
Neurons--------------Conscious humans---------------Traffic patterns
We cannot explain consciousness without neurons, certainly; but we cannot reduce consciousness to neuronal interaction because consciousness must be retained if we are to explain traffic patterns. We could reduce traffic patterns to neurons, but this rejects the existence of conscious human actors altogether (and ignores the possibility that neurons might be considered merely another example of complexity at a different scale).
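The traffic example is not only a metaphor; it can be made concrete. The sketch below (my own illustration, not anything from Watts or our earlier exchange) implements the well-known Nagel-Schreckenberg cellular-automaton model of highway traffic: each "driver" follows only local rules about speed and the gap ahead, yet stop-and-go jams emerge on the road as a whole, at a level of description no single rule mentions.

```python
import random

def step(road, vmax=5, p=0.3):
    """One parallel update of the Nagel-Schreckenberg traffic model.
    road[i] is a car's speed at cell i, or None if the cell is empty."""
    n = len(road)
    new_road = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        # 1. Accelerate toward the speed limit.
        v = min(v + 1, vmax)
        # 2. Brake so as not to hit the car ahead.
        gap = 1
        while road[(i + gap) % n] is None:
            gap += 1
        v = min(v, gap - 1)
        # 3. Randomly dawdle: the local "noise" that seeds global jams.
        if v > 0 and random.random() < p:
            v -= 1
        # 4. Move forward on the circular road.
        new_road[(i + v) % n] = v
    return new_road

# A circular road of 100 cells with 30 cars at random positions.
random.seed(0)
occupied = set(random.sample(range(100), 30))
road = [0 if i in occupied else None for i in range(100)]
for _ in range(50):
    road = step(road)
# Jams show up as clusters of stopped cars (speed 0), with no obstacle
# anywhere in the rules: the jam exists only at the level of the road.
stopped = sum(1 for v in road if v == 0)
```

No individual rule here "contains" a traffic jam; the jam is only visible when we take the road-level pattern as a thing in its own right, which is precisely the move I am defending.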
My point is this: we must preserve the argument for emergent phenomena for at least one point of the system (i.e. the abstract possibility space we are considering, e.g. neurons, consciousness, traffic patterns) at all times. Consciousness itself can only be described as an emergent phenomenon of neuronal activity, and this emergent phenomenon is required in order to reduce (if we so choose) higher-level complexity down to the interactions between human agents.
Like Heisenberg's uncertainty principle, perhaps emergence will always imply a level or plane whose existence must be perceived as an emergent phenomenon in order to study complexity at the other levels. Just as we cannot know both the position and momentum of a particle at the same moment, it would appear (I claim) that when assessing complexity across scales, certain levels must be taken for granted (i.e. understood in their emergent sense).