Tuesday, March 28, 2017

Termite castles, human minds, and Daniel Dennett.

After reading Rothman’s New Yorker article on Daniel Dennett, I downloaded Dennett’s latest book, “From Bacteria to Bach and Back,” to check out his bottom lines, which should be familiar to readers of MindBlog. (In the 1990s, when I was teaching my Biology of Mind course at the Univ. of Wisconsin, I invited Dennett to give a lecture there.)

I was surprised to find limited or no reference to the work of major figures such as Thomas Metzinger, Michael Graziano, Antonio Damasio, and others. The ideas in Chapter 14, “Consciousness as an Evolved User-Illusion,” were lucidly outlined earlier in Metzinger’s book “The Ego Tunnel” and in Graziano’s “Consciousness and the Social Brain.” (Academics striving to be the most prominent in their field are not known for noting the efforts of their competitors.)

The strongest sections in the book are his explanations of the work and ideas of others. I want to pass on a few chunks. The first is from Chapter 14:
…according to the arguments advanced by the ethologist and roboticist David McFarland (1989), “Communication is the only behavior that requires an organism to self-monitor its own control system.” Organisms can very effectively control themselves by a collection of competing but “myopic” task controllers, each activated by a condition (hunger or some other need, sensed opportunity, built-in priority ranking, and so on). When a controller’s condition outweighs the conditions of the currently active task controller, it interrupts it and takes charge temporarily. (The “pandemonium model” by Oliver Selfridge [1959] is the ancestor of many later models.) Goals are represented only tacitly, in the feedback loops that guide each task controller, but without any global or higher level representation. Evolution will tend to optimize the interrupt dynamics of these modules, and nobody’s the wiser. That is, there doesn’t have to be anybody home to be wiser! Communication, McFarland claims, is the behavioral innovation which changes all that. Communication requires a central clearing house of sorts in order to buffer the organism from revealing too much about its current state to competitive organisms. As Dawkins and Krebs (1978) showed, in order to understand the evolution of communication we need to see it as grounded in manipulation rather than as purely cooperative behavior. An organism that has no poker face, that “communicates state” directly to all hearers, is a sitting duck, and will soon be extinct (von Neumann and Morgenstern 1944). What must evolve to prevent this exposure is a private, proprietary communication-control buffer that creates opportunities for guided deception— and, coincidentally, opportunities for self-deception (Trivers 1985)— by creating, for the first time in the evolution of nervous systems, explicit and more globally accessible representations of its current state, representations that are detachable from the tasks they represent, so that deceptive behaviors can be formulated and controlled without interfering with the control of other behaviors.
It is important to realize that by communication, McFarland does not mean specifically linguistic communication (which is ours alone), but strategic communication, which opens up the crucial space between one’s actual goals and intentions and the goals and intentions one attempts to communicate to an audience. There is no doubt that many species are genetically equipped with relatively simple communication behaviors (Hauser 1996), such as stotting, alarm calls, and territorial marking and defense. Stereotypical deception, such as bluffing in an aggressive encounter, is common, but a more productive and versatile talent for deception requires McFarland’s private workspace. For a century and more philosophers have stressed the “privacy” of our inner thoughts, but seldom have they bothered to ask why this is such a good design feature. (An occupational blindness of many philosophers: taking the manifest image as simply given and never asking what it might have been given to us for.)
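A technical aside of my own, not from the book: McFarland’s picture of competing “myopic” task controllers is, in engineering terms, a priority-interrupt architecture, and it is simple enough to sketch in code. The little Python program below is my own toy illustration (the TaskController class, the two drives, and the urgency numbers are all invented for the example, not anything McFarland or Dennett specifies). Each controller monitors only its own condition; whichever condition currently outweighs the active controller’s takes charge, and nowhere is there a global representation of what the organism “wants.”

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TaskController:
    # A "myopic" controller: it knows only its own trigger condition and
    # its own corrective action; no controller sees the whole organism.
    name: str
    urgency: Callable[[Dict[str, float]], float]
    act: Callable[[Dict[str, float]], None]

def step(state: Dict[str, float], controllers: List[TaskController]) -> str:
    # Pandemonium-style arbitration (after Selfridge 1959): the controller
    # whose condition currently "shouts loudest" interrupts whatever was
    # running. Goals exist only tacitly, inside each feedback loop.
    winner = max(controllers, key=lambda c: c.urgency(state))
    winner.act(state)
    return winner.name

# Two hypothetical drives; acting on a drive reduces it (a feedback loop).
forage = TaskController("forage",
                        urgency=lambda s: s["hunger"],
                        act=lambda s: s.update(hunger=max(0.0, s["hunger"] - 0.5)))
rest = TaskController("rest",
                      urgency=lambda s: s["fatigue"],
                      act=lambda s: s.update(fatigue=max(0.0, s["fatigue"] - 0.5)))

state = {"hunger": 1.0, "fatigue": 0.7}
for _ in range(5):
    state["hunger"] += 0.2      # needs accumulate in the background
    state["fatigue"] += 0.15
    print(step(state, [forage, rest]), state)

On McFarland’s account, what communication adds, and what this sketch deliberately lacks, is a buffer that summarizes the organism’s state for strategic (and possibly deceptive) report: a genuinely new piece of machinery layered on top of the interrupt dynamics.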
The second chunk I pass on is from the very end of the book, describing Seabright’s ideas:
Seabright compares our civilization with a termite castle. Both are artifacts, marvels of ingenious design piled on ingenious design, towering over the supporting terrain, the work of vastly many individuals acting in concert. Both are thus by-products of the evolutionary processes that created and shaped those individuals, and in both cases, the design innovations that account for the remarkable resilience and efficiency observable were not the brain-children of individuals, but happy outcomes of the largely unwitting, myopic endeavors of those individuals, over many generations. But there are profound differences as well. Human cooperation is a delicate and remarkable phenomenon, quite unlike the almost mindless cooperation of termites, and indeed quite unprecedented in the natural world, a unique feature with a unique ancestry in evolution. It depends, as we have seen, on our ability to engage each other within the “space of reasons,” as Wilfrid Sellars put it. Cooperation depends, Seabright argues, on trust, a sort of almost invisible social glue that makes possible both great and terrible projects, and this trust is not, in fact, a “natural instinct” hard-wired by evolution into our brains. It is much too recent for that. Trust is a by-product of social conditions that are at once its enabling condition and its most important product. We have bootstrapped ourselves into the heady altitudes of modern civilization, and our natural emotions and other instinctual responses do not always serve our new circumstances. Civilization is a work in progress, and we abandon our attempt to understand it at our peril. Think of the termite castle. We human observers can appreciate its excellence and its complexity in ways that are quite beyond the nervous systems of its inhabitants. We can also aspire to achieving a similarly Olympian perspective on our own artifactual world, a feat only human beings could imagine. If we don’t succeed, we risk dismantling our precious creations in spite of our best intentions. Evolution in two realms, genetic and cultural, has created in us the capacity to know ourselves. But in spite of several millennia of ever-expanding intelligent design, we still are just staying afloat in a flood of puzzles and problems, many of them created by our own efforts of comprehension, and there are dangers that could cut short our quest before we— or our descendants— can satisfy our ravenous curiosity.
And, from Dennett’s wrap-up summary of the book:
Returning to the puzzle about how brains made of billions of neurons without any top-down control system could ever develop into human-style minds, we explored the prospect of decentralized, distributed control by neurons equipped to fend for themselves, including as one possibility feral neurons, released from their previous role as docile, domesticated servants under the selection pressure created by a new environmental feature: cultural invaders. Words striving to reproduce, and other memes, would provoke adaptations, such as revisions in brain structure in coevolutionary response. Once cultural transmission was secured as the chief behavioral innovation of our species, it not only triggered important changes in neural architecture but also added novelty to the environment— in the form of thousands of Gibsonian affordances— that enriched the ontologies of human beings and provided in turn further selection pressure in favor of adaptations— thinking tools— for keeping track of all these new opportunities. Cultural evolution itself evolved away from undirected or “random” searches toward more effective design processes, foresighted and purposeful and dependent on the comprehension of agents: intelligent designers. For human comprehension, a huge array of thinking tools is required. Cultural evolution de-Darwinized itself with its own fruits. 
This vantage point lets us see the manifest image, in Wilfrid Sellars’s useful terminology, as a special kind of artifact, partly genetically designed and partly culturally designed, a particularly effective user-illusion for helping time-pressured organisms move adroitly through life, availing themselves of (over)simplifications that create an image of the world we live in that is somewhat in tension with the scientific image to which we must revert in order to explain the emergence of the manifest image. Here we encounter yet another revolutionary inversion of reasoning, in David Hume’s account of our knowledge of causation. We can then see human consciousness as a user-illusion, not rendered in the Cartesian Theater (which does not exist) but constituted by the representational activities of the brain coupled with the appropriate reactions to those activities (“and then what happens?”).
This closes the gap, the Cartesian wound, but only a sketch of this all-important unification is clear at this time. The sketch has enough detail, however, to reveal that human minds, however intelligent and comprehending, are not the most powerful imaginable cognitive systems, and our intelligent designers have now made dramatic progress in creating machine learning systems that use bottom-up processes to demonstrate once again the truth of Orgel’s Second Rule: Evolution is cleverer than you are. Once we appreciate the universality of the Darwinian perspective, we realize that our current state, both individually and as societies, is both imperfect and impermanent. We may well someday return the planet to our bacterial cousins and their modest, bottom-up styles of design improvement. Or we may continue to thrive, in an environment we have created with the help of artifacts that do most of the heavy cognitive lifting their own way, in an age of post-intelligent design. There is not just coevolution between memes and genes; there is codependence between our minds’ top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains. And if our future follows the trajectory of our past— something that is partly in our control— our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them.
The above excerpts are from: Dennett, Daniel C. (2017-02-07). From Bacteria to Bach and Back: The Evolution of Minds (Kindle Locations 6819-6840). W. W. Norton & Company. Kindle Edition.
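One more aside of my own to close. Dennett’s invocation of Orgel’s Second Rule (“Evolution is cleverer than you are”) rests on the power of blind, bottom-up search, and a Dawkins-style “weasel” toy shows the principle in miniature. The Python script below is my illustration, not the book’s (the target phrase and the mutation scheme are arbitrary choices): purely random point mutations, filtered by a mindless keep-if-not-worse rule, reliably reach a 21-character target that unguided guessing (27^21, roughly 10^30 possibilities) would essentially never hit. No part of the process comprehends the goal; the scoring function simply plays the role of selection pressure.

import random

TARGET = "FROM BACTERIA TO BACH"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate: str) -> int:
    # Selection pressure: how many positions already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

random.seed(1)
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while score(current) < len(TARGET):
    i = random.randrange(len(TARGET))                    # blind point mutation
    mutant = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if score(mutant) >= score(current):                  # keep non-losing variants
        current = mutant
    generations += 1

print("reached", repr(current), "in", generations, "mutations")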
